[Binary tar archive — content not representable as text.]

Archive members (from tar headers, owner core:core):
- var/home/core/zuul-output/
- var/home/core/zuul-output/logs/
- var/home/core/zuul-output/logs/kubelet.log.gz (gzip-compressed kubelet log)

The remainder of the file is the raw gzip-compressed byte stream of kubelet.log and cannot be recovered as readable text here; decompress the archive (e.g. `tar -xf` followed by `gunzip kubelet.log.gz`) to view the log.
?j7_Zz~cz7J%"RruD֭@Y@77di/Z}|OK)=;g\qbSZa"eU0ZswF/*O F!hs|̀,K-glBlFNHZKaF,1 !-mLr^rV td\F?>V$}~!!^g@_v؊ C1Ujyf5Mb>_~(KAiAֆ'xW@9;7AIk&)5"ɗ3쓗f BV["^٧ JzQH^X  :Fx">EfxdV'D'LR+0^_a%ʲ0}8&-ajy+`o0uIOqXg&IgYNz&$׻ }7p/[j 1F[ǘպ`0nWQJ5mKA{+Kai=O8is(ij )g.@k(q8Ns9}[QU w `y7]peo5|WjL szK %ՏSˍ&dQrnzz `Plϲkә t5^.8xf|ܡaЦ/Ɍ5*[AO =QY %J۠س&yKg}xbikrJ9f') [K_n 1xr=`o`04G+,EI|3 Be2$-229bf6; at Ⱜ1XT:LA(b,jFMg-1!]!eX$mp=Kc sFtwڀHF\)D@#\ CmeiuvHN{vh ڳ0y>?] "9MٟajbSuCLV$B%CeVҾ5>$GwP0s>raŷ ~Xr<ք<`;9$BLYiUdNYB^eĜQj2! e2<"pΒ'E.୷A:!WiqcI8;zt~L͔ nr녮$/oJe)f!>( r23T @wػ͆zBȍFnh I58zbU }r{[XT)մ7mu'7"hx@e1+@T)Kn-FzԞT%Ҍԑz;jZG.6RYy!$R Rj%kP&D1R.s;,iV)1 oY(qg6PaɥW3}Nm{%xRU &(5))*,)88%;yg ≟L" O&S/NpٽozI @*D*\gZ !x^h p:vGn:Cm>u KƫwWN˗*a*+JS4|K3'nꥍ+:+OFj\[k=k/fF6S}م ` ?̃ n풡Hmմ};غZ;TgX7 _?kY&3bĄQGN֣q6爞ٿO7,N3vV6:_Y׮HwLcjBW)@oԸ!c V5^_'rן߼7ןοsvܞg?}Oη)ƒ k?ܛ׿<`j4|jaSKge^{}wVjq $/WX_ϳĊ8nNͦl4w^&6h><Ѭwr\ݼx-Ē 8A`]Mƭ)#QG-M<̟ G[Be(J, &6@Q@BsH%V%-(g;Ek[{^Uϗc:AFǎMnb)9KJh Y"xnVKZsؙrO]PcCk4qv65,;yl =t~sjy3Yt jZ!Wu1L-01LpŠl2F2R:+MGJ>*1լcuc´ G$/'YG<& $a;%$ds g`?冉[Zgptw:"1_p8GZ-ԓב|>qk=mɇA+àHHBWGѯ^k#oH10Rhړ?jP.?><{y<9\j!ī#|g]v؁Z!wX(TB%|2]o5k;^pf%^iys —ekś}v4c4S$*H]UЋ8gɌp\^5_+fO>,D5~~4-!vmp,Gtyt}./n.a>nmЎi۷kgCOAfI r3Q hBQS(!hh\( $%)Cpאby, ;k!r ;>+ĤV֊~̕=/8- PHZ s<$!#/-}- QԅC^eFrR^$D>oX(x)M 6Z-rEI"wƫގޣZd\T(FYČĆefvALȂ k,W)qayS2$n:q L bt4@& Ɍe鸖3ƥSEu!d5Fۤ1eWdXXQ2nZ '1r@wQR37JdhJ:MdKo_߈?f~/azŤrN֧eUi?cǼ&+eKiE JTNc?q~rާSIH\f 4sQ?sqm;~o%}rQo-S/+Yڷ_~ito)=k$fإG ew;>F?`?'7$/1mo}IJ؎a{1] qqnQT5ߩ]z礹~_FT| ɨѶEw/ 븼 |;ۗgNwWmBkn4mma "fb cD&kBFyė敺|V^xw@̏{/O^ˢEJ|ʀm6>Ж`ލWo^j@( ՙ3/jwՙjw,T{3* S=k}fxN$f۷+m"&YE}pY#`[fߛ홹Lѳ_ vs},{ j61Hg

h*t:7X# 9mrN{u7>b>Efٹlhe[踨YZhd ޹`2X?ɐҸyo0dTb* 2vD2VX=-kpw뼭r2hoH>O7nRẋJWqTp>|kuEG`33Dn+rRoeq؋N ]K`ժ9>s(/2hOk1J b{UeL٦bz<F2^ ;'Wk-xuxPvA+׻k#Nu|Fm{@T&UfTAS,ָe׭;z^SR[ 3E%XHIwjݜ< R'TNs0'S^Tj x5J C;Y$M@ Wl|%1L5 S2alD"X`< )#KdVѪ@vR0%L5ȭٙKY#YX|44( /J:2=HL1 r@5r[Nʷ;6[X5NHB9zB)UȽ= ٪dd \IAAĨ3y)F}vIo3[M{龦-iσ2x- ^ |g7;$"<>9:oZ:E!Q[E6[L: I@:|ehKI 3ހM)zcp[@LP DwV8+}OTjD7v;~s}.<%[kooK$<DŽ emj [ 9Kޙ-SwV7~-MFҦ L ܸmL?'Jݮnf7OO9nt1f߼4MǭݴOf}.X1P 7v+5y{ܳ_2qtvEҪ7.Xsо knc^:N]-݃׉NCQ\܆\%Rd.|"] DGwN^I"羡(=P/6%b!vb#)t' N^T $Xp% @B8B9Db *Ķks-\tvn6VNu}\J)CMPiRŐ2tH#)RPGLR)qʙ@&b`ZZrNEI*H ѹ!$МaT9@ p2VX~;^)ǵ\۶u-PA3άނHrAṛP`ELbK/BmmkxLmM-t\pP+A܅#j<1PʝQ5FCZFjXs`R;iLjʀWB'&s(jFIpd ˭LE5.]ӤO?y\Xěۋiv\v5fH!4PW1}`ޅ<(3i?`şϛ.ɓb{ Nlj8.5 C 9EEk%j>CpB=[M5HY)XOф"ԁhPo!pR'_7DxO!?nnKW 1o'..?*Fٚ݋hTn]*#Wlr٪Ka %/JE-M9VM*X1Bۯ{y×OG'}x>:8'N(3'9<9pQils r-G,mZכ,͍Xci+uﳏTu+gk@0ڙ7\ѪɗNSs;pXWld:#;,x _BepQw3CMr6] nlk >t~&;[4U^tg9i^0L rAߙB;I RhFn{-ڑF >~N]rہcR.pNk6sq5C#b#GC jÉTf:mٺ+[xtuZmB~k*4];Ǎ4ᣱ8Z0_FEL1VzўuB3+hlRL{Y?l[e bŨcu=ʰ(%\C}AUFBYhAj\3mYM$16)'wU]903 w//*;1Q76^dk5ޮ!m5eZ0VZd)eUj۠?S('S!۟l9z-&I@sMp=Xc-"@&)ZGj&TgѸNS-\*JݑR40^—f@m8PU./]Y=u ; ),>PGWϣzNs,]1%'CxZS-# VBN |lceA/hֹύ.~>]ӆ4{w>K8ٻW 2' r_Ѐ3kKNkZ۽[}Kmw4 'J')^K*xB@LGfL4T)07Pz|[Wk9)ܮh-,$yv-#i.RPΕaʀ+紲}|jei!ץwc0ZiyZ_煳1`TҢAͥ8qH`^^(@(HSAvm[е??G߇MV#˅m&\ I)K8*D@Tʷ]uWj J]W^o4E*hAHy #']̉Oc+<*BXlE?wi[M0E6s\JqB$f=r@ə6RI4DP-#5HOUˍAN;FxT >1IT{EENJLʵ+ڃbϟ?<.LurBݽ h᯻XZfheWc¨SeB+@IR>0yax}0}I{ZU}dx1_Ü+:Ԡ1SQv\ΩFS:n8 ߻ G(/:г_3RV S4xӁԁh) 'N'_P4p⸗_!{pS\J};m<_%F{\S#@X(\r2VujkW5gLS]qqkOE=<][Czt6p^9[ sv==q s"$QA dJZ.]I+],[-_ƪ2bQ| 5ъ)—U'\ҽjag"g/:`#yaj"x=DߣZ ꛐ;rT2bR4{=Y_?.B /N>}tp/NPfNsxrO#2dSAm3[ϏX447Y҂51YW6%gf뎳,gk@0z70OtuA8 l~QWIo _Bep+bPx"/\}L6J6rtkcX^D"'^4Qԥ1QS<M9 ,J0ye>naρjCyB_4Cd0Zx5Jħ1|!S$wem$I~jy5؇އ,؝y`0Ȉ̔%(^aI(ՃmXʊ/8qc4Jf:g9ٸܾSl >IQM+]Uc ibs*BYlPVqeiC/BYTcX20Ҿy†xqf/]TK\(i=`S!vܘD^{D2J)|R'pk>4:,Tr{\bcKI0 B2;فO!$+|m TMtENOG.)h%r!?(hG7W7)'W7ɗzrU>'{yYꗬ{w7_8`UA7AuWgO}~uu{RKz[_ C;1 =~[JDʲv)+9hq>Qڲtd?uҩ{]_ɚ.w#M+JLt܊ܛ3Y#h~%׌YwY=>Cd/_0%a\ߋ欕rNbk/ }pj )Nt3pAxOi%P65N#맔cMʀt 
8i$tNxjLuMj!6ӓ=@u=PR`vl~Mӭw%(kGt=$}w_+?-o_4.͗8sKX~LJhDUudmM {!BOs>}E+:&H`Tp1RdUV,\DTLĜ<[ c#^ƂBXLP-CN@ $LHV:=Yvtt"h5K#FH;&aGE=g@}f(6--Ԏ!ֺ̒̏lNq/+K|hi@v{AaDՉ%HY2qԸCbP>Y ])RpKJc`?XG"IgDA&)P[[Z*'jcD!$1 E,!% {* [vΟVg%=>怜vOٔyGf4lkJOl>xG9٥'Fr "l:GIc$WGFKXlڤ7UwusqQ'ĕfA=̾;XĀ:}HkL8ֿ#o/ Glc:…\]3Zns1O/es0~?y0_s0u!\6f q#m)%p $j?xdHieJ%(J9V%Z _߈Saa|C| fPןHEHm̈f~qhVo]z!A"J0YRTv.Nw!; ݶq%jAHV,=ԊN*(b<_wz"ggЭ AQk @ze)IR)d/]`u)Q!\qI~$ē-b.f1B%>O;'|-벞 u9BFISb]h|c4PF2!J#jFjF?i ao #1k4QxH cŒ:& (bA.RBn8Lڼk{nMa8ogiEBr BPjPRXc,+X c4&k WDXh~ӡ |;C7bL |zTf?:BHڷJJ6zvXA+k!C5FzjX)^/͗t%0eɇyvJS9'5d0(8JZ?h q9LtU$Į:"u{ OX"}*sҳkt)_^MU~8,xs(%ƨ`A,dEcva _ۯoS0(]*ggEVX)޳ Z}'ˑOW1O'w|]|_xOD?L&̓sU ߮?^|^ñ~c={^vn|p:cOUruHB=]03ڳLHND Ǻ.\ܵR1c(]wt`-v@3;jRp)B Cd$M6Z}0 eP1@vϑ,Aۆ!G8'gBzmL9huiCB~M`p֞k.λ^hr![ȎNŎvɎiY3T)S2ܦ*뵠~]Of MG{_x(.nn&0__nX?b݌;L\^p|&ǕvPrۘ2e\YY+O=z{H%Oz2z9 ϙMg\9{ Kp>_A^˟|ۺ~nvߞ/CW&8?k߬r8;^OgdV'm~Iio׮,a8.RO7y^̉ރ흹1~}ߛ"mZ쀢BRƳ3+9vGrly#v<Ͻdij=5DV 6@FIS"YȐNTjцUH{&zR&iXy~)6)L o#D9Y]{٥4|?/Ηw}Zpg%KeP6{im?ΧЀ8z~L`f.C3קq4`lsRJ"]:(: vɅ;QUĈP7Y;@); hP:!dIc"(|ΑR*3 %KA;)$JH Sh)d  )IQR #nTb^yg3qN֖ןn.*'˧8`;T1ٱgtԊʚέ>b,ᥕEXồ6^dY*8~kd;QkmԪ@ o%Ύ( #R%:cL_2 >ʉW *i֌{U[Yl?P>-u u1Xo,qeT蕫K}Cs^yWb,$"@Òro\M^Ӱ0&8Ƽ(\Rd_ LO(c1"zRz)ddKy؀0v*{X{~ֱa1h\H!&ј賵.G*eд3وus#tB#&1'QZ}%@ 2x(D K݋fl߽{4C}괡Ҵ+/޸4&|@xVoYH1OD1HE2;%l;@;e;*{Uz2/yV&2\ɅRY%^[#MVVGSt&}2hRe-|LB3Ǜk^{6M2IR_(&*d,Vg%JޘS+M6&m"vrhyJ^B{eٝfw)I/ s8K:L;-9!#c* !yF-xA A.9q߆<?[i튣Xmyɍ/(ƺfB9g:FtOcu4QS,;xt|ӫ9'ٚi&;glEvڹV{%r0#-1R&V>?r)ɯ<(C~ӎr榚x͆A}"vp//??=oz#.ѻyw׷,pEŠz, | ܚMiSd^UsކW0\-8qe8X0nOʡծ9yqA\~&#͇AXܢ78p[bHx"~_NԆFzJ'A~PbQ2)4*'$d DtBFҼefUF%)lϽ=K״^AcvLu Q x wjF  ,G?ةr3M'og C x⚳mjX=ub;WvGN=xeZezUô&aڣB0I./ ܅Z DeO~ ܒknW^NYٯk@Y# 8bnJQ>E%m_gO}vuCӷv/ nNk4\.F{Ͷ7 cJ]"kZ'#|8ո8ݨ!rB?mF{-Gt&DcuP2Vﵬgj'>g8^&^i? 
&$ ҲdP6Pޟ}7Sn0եM7OͷY+ 2ikLɥU.pR$} {/-VeܕpFlGB?X:\cz:&K$3N2\d#ShYق)I!4Cg,:-9Ĕ ӹgVO نJ Ȋbr%''ܠ%*zDfB3jt\MNʾV=grYOy)+{ EX3WVНwYm+nrq;tUo_i;.x&= isc^hA(tb4ᄕ,)*c¨ژ%$zR0(z]"ZҺeR%c5r6h\VKي,,* /)*krG߆ŶZӋi>qp:{J%1#34sQ0齐H41h$ڈ,1f,069jjd7-{mr%D5F2@d 2!)&HMZB*gUd,@-ec/cp4I6k2"RJ@I'Q&dKhj`@GZiωF32\{!4K'ڨp'taQSATDpJ#ϓ ڟ<\/yZ4'Q4ϓxy$@ԴX+:rAdOIn PߢO\GCAL /2BoTY¡^d4ؘr2dch2Vжsa1=e1hS) :{mde "Nd0jg"gJ%1!leSgb+OӲ7圇*;.˦2ČĆmt Ct Hh5) iR`:[ i5PIb1ex PҳXEPNb1.* !ˌ !ȺulN <[Ɖ7P -:fI mI2geeU#gC^1&0k.J,t":k"w`8Z xʨ02DC6dK@M|y)-?q8NG@S]! 3%H4sR3zUfNv5?HF JcE4 +: .J~9,R2^pT;UO*6-lr*RYCYi-]2R?gcN)G@̊TM/ };|REv |vY)gA"z02Dv Yu8])8z@"'KelTqc!*CF'liHiA/pHe@B8$)r.lr6SlDŃMI4 Yl,*h1)T_9e{7 rnYy_UT|RWPrs*ўJ{jK*SgS}o(vK6/jRIcȍu4L5d`kPd&2EK=EZv}T<}}Dڥ!o -}aCΓ)^G%P!]tUoٳ8[&~oMCuޭBh%L6wSwls~d6$~=t{K[Ր:7^;ڒvdأdL"27ɘ\z'ɘZY'cdLŵ~V ٬vƍHITH!W6oa~z[pOV> O:WV߂9gsfVRɜsdps٧8c ЫI^_W"@xi!eɍeJ:sap 32qƹl6Q73dwJ] qC^>I9\ t1n;拟O4ߟ^4`VdY7;1)yZ0r(Rhs\AA쀻 jl=@n>ӑ0\N˷eJzXAqwkE#ߦ",&X4 Vy2{`0oX=B;pue`r 7t픭>V~i`y n[CL/]dz>Y/vIp..b$<"݆D9':g>VPN9N{y}tg3EǺeݍu?{kf4&nKTt&fOl#wfvŃyY"jh'Xk{hvu޾>w(-jڟ8<x+9N*F_.wZܳB<'v\rEp>8/Pl6 sU֟MlB&k~VGni>a<]QmyYӭjoX %vujP{J ѫR鏃뜿p4:oVۮuGIt#So|ǻ|h$^/8->H\Xǯ2%؛dB N"g ^'dp Q7= ;ޯԿWg3\CCcb֖8\n/ G} "ܡ&5Y [2} pMt;1fg 1bެXmJ, ڕr,փ!r#٣)qzf@+r) ^gP}at>-gaX޼fUWQ (JIH2$;JI39?@Cfke{]J\[ i5^JtLBGxY5ASbcNV(gd ]6-v4R@iFc 5.pEphiF0T,&:6]~3oy\<,r.p2YV]U\\q vY -χ%ͮf5ćҹUJ|X5VRJ|X+a%>V+C厫 ;rUW*w\厫q;rǽ|#R*Щq;rUW*wktF)JTKԒ$@%cdL1U2baF%Ssn8 P gio97 k~ppX,S厫rUW*w\厫q; 6ʀ5zS71U2JTɘ*S%cdL1U2]asJrѝ7XRq{3qm{3iM{3)5ٯ7l~WO h]Vcj[;rUW*w\厫q;n=YJ};:YWg9r6R 6FtlGt9dQݻE2Kܟ]џ5qduX^ME58"8M$ RhI"2/ 'b J/}ٔ7N2e*/On89Zw3C@T乷'R pQBd$|" %;!{6íAϠ`)%S)URr˓6\xU&V4WR(\]`/T;攫{qJ𸲮5u>JBggs8˓^8VFiCʣA1ISHuogW⠽C &yrroJ.+ y [qO)X-#(TV;.sv27P62bCg%&Iy eSB X ɍK͝qfw3n) \j;"'zpGShIM=0gYnari}wyf2O-ٕv)V<w%ykS$aaGDSGv2nhrF(L36%J8Dg#1 qq"b.n2=5$Q‘~n_kBsԁ:k[h[Kvx֕Xէgt6#m]tEaP\su8Ni7`nvaכ&S;(I;qnocdm'GṬWl'<${WFu}w}Vͭə@GH+J#~Lࢋܳcm%Ju6w~xy/~㷟?gǟs>lSE֥n4_֜|逯s_Q{P}o h~ ipұ~8O:ݮaAva'(6yu8 Vߗ0q-К>}s4Fc)FMlVmJ$&P4 }!D5Z#m#6HSJ/0yM> N z(|yB=&@IaZ.jH9e"I|%3t؜@jX 
?vAhu~bp9|s߻~UaMz !g_Jړ1O朿$^7_a%IL"MΓo516&Sڄ>yULXw's'VZ?z]ІHEE1APs F8QNʉUgڽbj;ibџ> (9cfr8 YZlBRZ6 M`hͶ\ՆnrQ[I3:s? BKOu '5FzmL-é dz`-XBO%0׋Y\{cC4ݪkjA9:1ZۧU 0)5yRy4 ; !y jJcƔb" U'"(+T49QN*Q)Q#s,  C>; 'p*N)}f1qVh]ƎF/ )1ǔ˨P"Ι*@g};NzϢ,Eq2pf;Oߚ= I&r#9kV/%j.a\.6;jYaͪծl ST+:m#,2%t1 $օ$wՃޒ:VuHE$8BT$<(GhбMմiUӶia[P|90e9+5rGœ.mwt. (YKMLmq\y *6s`j ?+%mA~k]dxqU?Bl}O`N`_f z{LywI[y:QM}<>|5[^lؒL'm[ˣn~顀KuŜGYeUoPHW :P&PIajQczVBo_侂o0^ywZ˻/Q&X`CY{[=Ӫ&w-MR6?VGt gE.!%M֫{fɊ>ޓ&T_3 ,4n3@Vyg&sVʤ&K~[rri-˅>@SLGQl; 4 \&'uE !!P;n񿷥 KFiOD Ekq})cq[g'B^>Q^7"Ɉ:-vRVp9x e^120LP[Ls9p zv,ٻ6%W=b3qWẁIpNg` o`Ғ T/dȡH53W=uIN;(l"ȨIZ-Rb,XbT6%QHY0#gpX+[! J/] ebE;c :RN+گF!o'ǽHXL]B3v$U !m16o)^_ zϜ_#LB P}/F|" y A @ fP3O{mO5@%*,9G^P"9,,+&ZPG*EG`+NIt;@MR ?12 e佶% +E1 3m ET2ц3~/fdrt"&|[Pњ]7"Fla 0'!eJ[MtNf判laCQmVɥCgGSGcZvYcLfD9(0$ ٢O1xp)L:MYRpI\>{Bo6l1^ SCTK_T;SW-V)ռȭk[u~l~]}XxϞ̦Kf[N;>1=N> d5{Pk`fEM[R Ry J R1M/h u/ȼzRbi<9 ݰMM0PK탥V]wG~߽gشd)(Ybf Ejcz-,@66#EMDmCA#E{ ^.]P?[ց\_8*؋ϙ:Nޕΰԣ iviqq֜Q2&ijIƬ8dJ?dL1dLxxgzZݤ!5]5\t֙SWӕJo{5i#%L/^cLh՟K!ׄʒ7-/ Zt 0k<0/B%7eClb}:5?|61eoSG9_3u_&Gf0y{y}k>wQϭuKF X7AeYD6}x&]=7x͐e܋e e`-ՒMU0h}R*4H4Rij5o*)JD;Z)9'PVrlJVj?uPX}ۯ:+&J碮\XURBBuμ6Teʿ \‹ V&k XlPHxu*ZS`9_R &B b_: !$PM +)dIC(l4Te9'?,4v]"s]@̻!3ٯL0h6Ƥ O 1똗-7ke)՛커{^wϻ[?"6'(X_fRt A %cGFȨU"i`p@J+ȠvH&ﳄ[{N fq6衒\CSGJ{5%t_raſo06Ƣ&=LWu]Mvٺ< C:h|5?)k^5foX5֋W7i0 ) jJ3kx5$V,m *QZsQɤLE%oI)vs0Q8"|z^}o/_L33nVnC%D^nz9I9X;T` Ab|֊|* ) 6Lj~|Fjvg'ry:r^2^0{<`{ ]#W/,XP[v*g$ F,xf_^^P&FYC {ֱQJWYgl)g\}lKQk 36"P R&.;*d6 Ncm,eы@"KBOm=u;$ ՍfOTN8 UR0ZA8~R'-_D%#8k QX[$eD BYh1,`{щ8wĎ/Vwxyb8L(>ږZbY(;Fa?ET2ц3~/f-zr*m!N+CEk(]U=6*~ {u1Ϥ/,Yr|A/݉5HGgtȎC~7jX# _c?i!0e"H԰E@Blr`U#sUy#bf:S|bMY6DjD$iVHGpiCmVɵfGM.u:<,ǘ9sQ`H Eɟb>R36eI'gEVwP(Y +0 =AEoh0q}`r  ^͋9_gՇUiD{ϞYN;>15?HL^2 =G5meJ{Hu$HMR-Ͽ.Խ#iXJYFL\(t65f4C-{,Zv)C|nc^HY{c(d+ Yg8$Kf0I+V[)ݲA#E!{ ^.b]ڐ_g.:o좚,ΪJgXQdU8Kt[#VPãieICC^"2o.H+]]5OO9?0|rf=!hd5ͧ]wf^ ~xi*3/\њfWے{ބGüY"'gxE,9kn6t"1iϖB;#}wISPe˭_6ms>E[O5 E!)IM"+=/ *PulvW۝,MULr9נA(|YKNglRmM-2iJYa܆>dTBg.^ V"Ȭmv>VWDow?Dܽfnf3CMS/'_1K89I[v*0q(J3,c\BkC6 ޕ<ꜝR(v ;9XZR#.!(lFύ")Y[#a 
g9.{^U-n~`嫳Y)ڮ,,9$EVg(濶 "4+˔")F `me [$ԟAǛÁ>(( 1ePJpe粕 PEȸĜQvc,xo9w-ɻfIKz>:(! %l*T>jTrXCg=1 vIrNQDԽf-W^eDgMTY"Φ +M+ ˧?.ԸZ_Ț+9Io?/4Iy >AE≯ww].+)$<ԗ-cƥ8:▌b9;:1\#?Y\'ώpٿV38+G8P Gtx?0iCO pR\#0|ojOݻSZLwvojM"'jQ% `@9:%g߽5sD[/]^!xqR5(w㞦{f_kwݿ+?>\NN? !DY͉9WaO6{+boEj_.'uժ_p~[+)Uˈ8refLp8u`& ׳/'knN3rU֯:UWBAd, >]2W;~{$'L'~k&˽?J_|?~*??8Oxc.x c) %]- z,Ϗ q켔4Oʡd?b9K$gMST6UN)ijTKb; /ƭF&Wbm 0XA3G%d F!d"-3L֒6gPt1WMF6x_еy#S!Y2 4Jbp.ifT*f %l6ӵa۽ k"˶CZ/$0!a9,>DPJ}oXNTn]7Ӱ+ O sA]Wٟq58 ?[0 CQGn̍X'u*UAnv8aX#4OgզJ]ޭ7n#mpOͦY+at{rJEnWq?e撼?.hmN')'*$Rb`+'o9GB# #nifukU,a:z'J'$E '2򿄾 $7FB1ƖO0[#aœb V ŋ5GT5>TS|)ܾmdk㙲VQ{j[3FÚq=Jy]gl yPNTި=,['a8Nݿ̀o7yfA˜Ec;2Ό bkJxvQ-0heY6'dB66QKK&Ǭ#P9#YvԶyXc0-[mhYkNkwvLJ#!/&B r2@l㜤q4!g 1BYA -pE2!CȊ 隄,jIA`I5d> E26F6KE-E#jDٲFF4b+}H-gLEf+emLFG* B6F22^,èگ8l;hW}'i@(Zl?vGuhg]uEj{4؛TWVkѧ+"Xrw0ꪐ PUְ}WW]Iugm}?ht{?eVW<ߑߨ=_Z.aQ'`RhH!XKZ 1d0&09*1y'sOKiU:.jU4-S>@ 敊W. VIJek+"`|mUl嶶ڱڊKK͒R0 Tt4Q5FU%˙AX!e(.q[J;x885ܒ~<-O6$Ȭka29gLuƑ[vt?,]^M*iy/QJsg;۟SklEMӔ^1]d8 Z=dV|^5Sa,Fo=UeS1=.ؚ'y|A޴?5TmEj%jg;ܩ5W11K6f1*rO6yc"R 2%S,p[ŝe |iD!xLh-i;JH 1 2l M[Gx`G#{ OqFd@"z2d5hREΊVYyخ+ H')#)go@0&zxP@Ab"ULwmH_emG0 vg0bl`Qu%d{9~n=m$[-;m&lVWd=f1wj5n^# @ZR yô˜H 3:9,풌T;oeOIU<.8JrD u]6A@|($ɤq@P).O*E@!aNJ^7}~;%6^ouہϴj>\㕗vvǿ|@\+C8^y\(a+V{e;^}4V=+pk*RK U@$VѪ!BlbTna©ЀƜ\fZs΅LI\"jO@Eu(Fv`2qydL][F5 +;ut;pn,Z+|v|ɄLV%&DscjFSYVv[y5.# HKև"[N2ؚ]|Ů_o;CK 3rgr Gh1S)ߤg!0ۈfʗn^A:+/#*R%Vy ۗg_ZiC]Fy['c7{.ڥځZZ݈RȣRdmSB.h*Jrk)>8~[j1uyctwo|A(֘{X.#QGSʜzO ײ{wp%GM fr7=`jä|dXBf_/T`>𚩲E!۲* iϤZ8ԋI-8H83bRg&+A!UHK:w wf۝Q'vu'GN`g0M0r.zL (q"dxµI@$Íe KuI뤭M$BFcBbkeZiSӋbl?@ ]HWrM&*罥B;ꄚk.ygg/ U^#gGi`V:!>m Ժ 얪U r#i \Bε^ !eתTTXus)zdK2AoͺB,YaYݫ4j^^aJnhC}}eΏi7}薊Nz i>5gݟ۩0:TeR?UKK~kͭ7p95d)o1܅sS5w]}{T0+jxY[q\+DG=T:IZTI,҅Dx.ChdIɦ<*% i!ĩ\Q[.JYŜ` j"Qub쨻 x+9•ո;%~VXl74nVa|>o,D__?kZEpk;CѠRs%73*֖ fL` ,ku3`eh $pTkd@ALq"g;YڢoX4MzQ%N|&͒Sa""e^-ϗ2rB4*C)#3%ǭeѣKڰNwgWB>  &Y}I*#$k]  eu*P67u>ϻOnimkYs"RJ*J.ąqKFzPI1&8bJ#R.6q ֒#q0-NPD2V`ΑfiZGAC :3B1ş\Ϭ;MM}4 M-x(ykjPțSQv;"'gGp&'k'0QU(K6\iG<<;ͳuZJm"Ğ z|]ZLG G۴Z{7TqM4y `JawM`i\6ZqKOPī?V4-kmFZoX|z28[C si?;oƖ.mTts]Gp 
%-[[r}KmͰfw,OOȈZFQ+h8:ɢezC᭭2Y'Zmv!r&f<8}2\WFI9VW*P9 soNNoO)e8;\u#p !h#aIV=_#@AOhڔjۛ5͍ءiJz:໴+Kv#բZZ3{?|nOt FW9DL}fq^=d#_6^VQSTY&o!\Zm?Ӹ4;uU45hmX^D"')-5Mx.$40#"oH2h J#s'\@S(=LU5qO`;(f:d'TE@5D /h%Q2P'$ލpC6")";s/{fC9@vjX>wbS.DENΆyl!,n컳!q?LI /9]q&6*WYUYSs#B9Ƽطiԋ[_/! .P?"|4X !>6*bPU "$jJE-eڽPeÿх]ZhW2_l;LHK!!bʻAD<~G#*PS%E%cUAVX&(9qcq?}ԣ·@C;wAwa;yE1y>|k2CD7R)}r)9@u/M1-CM35IZ&E3V;ũB% ڲlE2Ed8aI_mAxx>TyĄG΢pUc\Ba|71'<& =b><cTn ̛ # g!爗2Z% 9f9Iш夑cqʘ8e(z `z !9T %R%c1rK(YdaXYHBN;YxPY&o$nnsu~SrAr%<BP`Qp@p9\4L0.pDbx8 Ys=,MtP«fNc'YV̔RۏqI<]lv$>^ .G{=8?.~ɧ7c؏\p|ܙ~TŘNLVzJq ޼k̭yjj3$W叀ЃE"[ǡӢvXeyfղa*zT?^{Mw~ً5r'} >c}Dl=,{׷{줇ut3z5st)7[+|橅Wy5kb<| yި+ሯ" ? ۙ>Cd^l tnZrC]]묓I[kY})W_;ƼYwWo:xbؘ"z..L-SzիIJmD=hI- ~;X0,ږqW7(\u'I$:Hn0<[0IY$5qwwem$I~nLG%v`򴵦I5)[7HdR,I!Uq|G>^#2Z=⾇֜ttm2o ]|nEllo3Im"bwmf ˇ=OQ)Mtɖ~E sܾ~ăy,TLD t2mW;s .֕}ӰP|8q؟8)7 ˟D%1%8x$MY"-T8K[ǀPlJc Xߛ909|Bo:-ˣ%J_,l:z]uw<n}jP{Ьx=XR&'vװ$=ĺJ)"*p8~en}X>SvA;e"f;Zp̤ڣIPY%cI{ &D.,Ik\?ǕOK[RfT([K(J胴BLNy4.jg1yXC p`/]9==ѼHv/ )_韾C#qS#3"Z@Sp e^3# ʅG (1Hn1ɽ'T%x#h&4R*61/,:Vf_I1 'HA<ʉU+btnDFs4φxKU߲ S_ol4DZ7>P?9d0(؄ fo%Å}7)ECGjOd 8L1A ꝫNBekͬ@V w(roQe,=_KqfImݝC  EZS 1PKwQß3&HCꓖGU:ZB=Ug(tw5{G78 Q8]}]Dgo$z{ׂ[fmL>Lד5+ǪFa!\vr0mCGY+xP,Ԇ3Z^tTTct {e<»X+DL2V l[9 Ј5.XJ?RÐb!;|Ð0RgG:YoVFEi; %6 909= mp<˄vI LG.ZCɻ{.ɢ}!c:Pq@GBGuh | u2-3RJLf1Si|dPs#NhD?N' 0A_A&ͤ6U156$ۓϾE WIhjT1xK&NY7?ſ#c{cl2&Ao}U[#XIޭDePSI]M_~9}HrLEƀ7ԒAR=/{={̛Wq@;fћ^>' T1AD [W#=l[z*_דpmoódٖc{q_|يLUCWfl"Ga5T]ڄOY=biBx? 
Ûηm fsBPUr:P,K@L-cbJ{P&p?cTbZDqiV6fHR gs9QE]mz{O <S CA֝.z/vZPw7l:"_IP&s#*W)x5︳) {Wl}M;H^>>՚w^g^݂eRaK]8twZ< k߯@dRo/h\W9doWg|6چlcGSh~/^n/ZpHvPޣ/g|gp3'cnp|)|G|~Wo.Wy>/H…څ?28ZKX˺wdorqqE5,i(zLs L/am۹;zq-ɶRl,qg Ij5sӫ39|G|%)=ЛOx\6㥝*D!,n]J7cP ȭ 1OW.K;;zlC /qi&S0CFgilm4CVf}d%i{G^ڞ>(c{ Ye{4S=ٞz}m3%_>/yBBD&O(0 L`@&L`@V}fR!0B5?8LSQWZsqLH|PWP] *̘9ur**So㵻TA]=Ou%aNa86G|X''6#b0"\si4^51.NQQu93%oo=n 7nϧq=PRhn'M,w<&n dqaGI<<0i`ţ_7FnNwʉrBqTg݉452&ʶbDuAA&̏z_~w*+V* ^i͍>!0Ù\y2`8S g*#V4=!u3.NF]er=uUU )Q]iF'2d An6磮 SRHss)ͅ?k5{.p/=U * UFzQ mPLŨSTK'GXf8-ŨDJrAGd){X@׉tzF>Ui  ;pQ]y5bcMO/~kkY󗋘/b,^yJw%ЉZ)U.Y5AV/V߿xU.@dF T\hsUS:]Qo˂B)Xr-Aقu[USP/E9# Tj \7*MwZ)靅 ӌ < EKd TkՎFH ը_u Դ.T͞hiJ#ATq1rz"nU [^x(ϝocC oYX_KV>q%|4 f|Fn%Rxlf8Ϊ/\~W+j΃D!QtH1}DRj"P2băDRpc)j S=\wb`P.RZZxLD+dZx@]M#l2&TKn'}ݻ떾Y7hicU:[ ԧ!on!-JQC+`qcP HQWNwn}VwY[MMq7K7v Pԓ=<#3˳>G(}ȕNB~@IҰJBHBCK/(#=07٢L|:w%rFI.1]B4j#tϦӍF,a,y$Iz1}J|$T$'@;"Y;zr5+_}5kAZ0uDu"~<녙 ?uptDxЖ eV1'XtGjQ@s¶{1pݧAu5|\ ~}5޴@vBwx7cV/^|_?ri <1 F`&EGfL2$ĕ.V q6SWK_r|ܦhLn,dY O_L¾? \֥*Pu!T!@|q[Y6V}r}qofM{_y|uYXd{I*+$kJ$F#c )PP7^u}/Snmuk[Q*RJhJ@ 2tHcpIl1R)o6qi֒#p0-@'mp"+$XsHsAPe-ؔvƺa^Fן|XBM=D(M.m:H6& !G˥$\(JBv#uO}J3*rM1]P=!%awVq);qѣ2U`DHŹWTfwAi,aSIX'4/qaI;h=4j8L>I1iDN2<(A73CI{JH N1_!+NAc ;hvrEN4>pfxGOfsd'qd_Ht1+M(*/5mZMdx|v(Vߗ׉ 1 `?֞Ռ=%<ٷjNmkjHK.5}ۤzkk(BVjwԎ4óu6O[UIY,t47gٮ8?]%xy8~+ZIΕ||ˮeX2Vu.,COj'O1Kxyqг5);WedMvڹW--˙勎Dd y4&M=>VO*X3B[3p<OCӿ~pJ9}N?ΟrU.Vxծ <3x KRKz[,-X]ͺ;}H>Rݮv_֚8}inry+Zշ?Vj6.cr0 HMщl5Te-eYAb7E/C(FM<=P"Δ&9p4uyD@1ȍ;ᒍ4җ8,x^Uuhzci1Oh @3Dӄh%Qd8CH tSbgWDjOu6É;am'I/su4*?]a:L jeН0[6%jWī8>vWBD'_ gUe]HUB>1*}z %;@ u[1h*dTSE@rwpxh4= +N¶״ii-s0)FuL],8(JTJ*@VX&(ŸײI?ޤ GPNݹ7<7 }-M}&SVcmjg4M-)xDx9iqK|DjKТ35IZV"TD(,>QD,D3KX4˱Qxx{Ty@a1p6h<˭G)8_ɎuRk\WmcͶ#ʛM(<:G &-Qjc$ 8NIxQΨN!$Inj S'#uHֳ"cP4RL4g,fθ^K9/,BZr½;%p-ĺye?f@Fp69<.hQR`ʙTri"Hca9{Ť&lÓc pgmrA r :GE=FaUL-̱dVcŰcW sms^]>Q@͋pcs¨Fhh RTO(;XQ!ʆ%4ZFR2jT $Q+]8am80#Ï]9"/y{6 'hzd8=.h"`yR/@m VIPBHUS >{Bۡw3wsc""yCnfsQ7cL}Kmc.XAcwaop4-KBƃlocm{2@l|\O^M'7\UտCێxkH9KW6sDxe!8ѢЀq@FF5&yVO&BQ5eyq=]Mf>_B!+Ky}Y E^w׋Xh;nŰ"m]sF~?:ƞ51 /馧E BM@C.q% 
9HJ)RS)bΒ+A7Yr@;4;073|^P8դ#O"|Z c?j:xֿ{|2{ pyϲ)^zD7L5.L(*!v(ԓDjiy7?BҶ4z8>f9ct ?b?̒.>Tx/PdX^Q+NVFog;f;] txӒ2g$EE6d8&lG9Vc֠ixkZvz'NYUiEXg Jh'՞BaWٛzWEo!I7@]U% Ddƴ$S4ErS䬼<8ə#^hmW4fol cV 7v,iЖ[=|ֲjΎg)젇}fЁ2 #LjCVpI&︊A@}#:옃Ο|(X3 AEn8ՑpIQ[ HО9gɀ6D.d >f涸Y&kPFk!hTB%L="#FJm'2BDA`52Ph ȸer h-{-FkB_8tZrDt0ЄPB%1xҡNAeݩxmu$9ҳ%9R#rc)MX2'\$ &ů ȉ0gK:ꏯ^5cH-m<)TsXpꄡڢxJy1i!uYedG]$= $[F>0#szfwu*Merw 1s=(fV1U N%%.!excRI.I- iE.,E1Q*=(8NxyG ^*f x1̀pmtJ@0OBY*]4):$}H9@ P/Ga*G>fP؈o H\YLf=RXD;:h=rГG9X*cGJQ-:AgYu !U=d 19ol4ױ6P]R+9d(؄K**;aHu߽/^}n6O>;t k~Ϟe,7w) qRVvVM7;N35;S9XGi#wQ!X=*ܕG[yK*nGkdq3pU;WZ Ws]B;W\vZmB%oJD*J\4%r9ӻWZa < +c^2Bmq{OԹ=r5 v^/8d4'*: A/F:p0i5{p @G筹&!.ݱ* 30]ŭ*ha´?qW2{g%<}Z>KMG:~j0uy1/d;Dqn\2N|[oe.HSg{vFMAfK>2wE &&7Z-CR-*+P>LT痨kkP/ꝁBz*TZlguU|gઐ;?V5pUl^&\I-on7G/gY>)s:Cr͚3sܦdeCtr *0oһ%acS7vtkKa "fX2B.B>83EF͖.]BP2?uף:af3M=`A7" +:IPGdAҟ'oYRg/kjKGtU*0\9ue^ّ,u u 84Ð+ 59D$ EPBnqǚqkvKpmֵ/-P&e&˳.fCiq *@U%" ͘-? 4۟J5#H&X1gpRI*.HEwxc\:-g`R1% RҬ;VKQKQ:t: xYS%yȂcK R:?7~BE}Kuؑ?$"P|"oe(4 0t~ӊ ޅ/" 3ҏ%3kofevx5yo$[FdS3Y%J$_sC@If?AZ}.þҩ]g٫,9E\(Kj\;M٫ɍ p<?.\u`si:.8>,r%]t\ $U=X`|zv2H%0d`퇵!gsjOwɽzjsI)5 zΕ9MLn/E):\ʻ«%Đ(91n?''k+.s?|q{[5ʑ,t0b0M,2GĘh*M>.f ={2 vIu\QK0i&#e` `XgrO93ԌcP|&ser't>;߾~m?ۏ\WuF+0Nu\EV=7#6 ?CfCKZ&g=]uƅ&bܛr~\$/7Q*uRRkW-FW~GlY0_ | 4bȆx!*.\'߫Vib| ĢdRh'$9 A"[: !# BB#㒔Nlϭmag"lyd(0Xӯ! *3S@Wc6R`d9N#;[բ1NZ=d{g۩\t)Įj+^>KK -VNҸ+ZDBZͬRJ9+be)¶We)#?mVWurgוٮmrpxw7ܱ&,ʶ*QhҺWV:8r[etC g\{4z5->< V^31%, 3pR>J%.2b2GJtiym"ߺYs"\NKvⶆms>FhYyt)I!dĭ&3NKbB!ӹW%klCrDd巘\ɅOʓܠeMq!zDfB-"g'ú4, b7' o}xxQoۯo&PVhq'W6jS f˖\b[w'REHޣRJ&V L'Fp#ex"6c3Yr: `֋)jRM#cc܍[)ςqC7XX8m|^I\,!|wBb/zoh#-s^K!d 3$il  ElidĒ`{!EM1hu9f#frd'٤!20b7F݈\n j7Em0j[ 6dNLw(b-$KDl DEi}4i+2䊼 ĂI"ɋT0c9 k#n<>Tq[1x("ʆQ"IAK%B,A64"¯Y V *iNTq\J֤[[}b .fSMdvMˋ(xʥRk)n@/jE6A+'KjJHbυk[Nra16cQ)Nv\$)x˘ERsr%' hFrxvꖍH b6 TFؙ]_ۢE)ޫ1#FGDJb#yX.:%!Nνb:"ۘVd ٖN[9;rI00<Ԇq`2/8#cbeF ڄ*fN y#'nTɡF&Vh! 
F1MJhH'ɜ Yc)gWDnVrFR0( "G BDgMNlG'sv2FD'.V Oz ~f'Η-tT!;@.xx0Tͣ̌j ^'~ҵCzHFI21Y c"IVt@.]zzq6XR\e-p4[Uu1N6-uԴLbii-`Jx.z ͠tɐKFǜGHJ#"D[W_+z[s16[2 *a `Q )O~\%n-ȉigx4'sclN6!p磢XN[juT 2KHhMnK6K6B) U;RAUY/7b#WY$Q)beTQQ%:_D΍z}#v& UښTeRipnR<`tP(~Oa0 )P>W<ڛ[~w޹LKѥ~If[: [,u>L+{ͨqQ9ic5qҘ䢦L3IQHEZjuqUy;S{?M z6I荢%"UtmNojwb?&<_f]j^R56Lr^_'o՟>u\g+%fjgr>3>G=:}_V*wO'ݸ;`C^2w;sRφQr{Rp~=a08pVԯGvv䭋kZYZ0Y[oT`|MR'&W*+댩bPd\e u\7q\<Ǔxfsn + >u=UG뒆s&QD buEDV1_yE2:0lkm`-2~TAM'61VάyH99k3c欐s$ȑGsxK39.R]&BJpAZ2XmV\g5{A@XcL{52E&we͍F(Ż%H\x7H(o")(TuD먂L| LՐ"H_xKK;Lxf_NѱcMxOv.QEQ;qF꭬׃e"t2F=d.O:؃ʍ"j0Lꗅ&?)F^1qθ.䌍99"3Ջ o 6+b4M@bb*iϠT ).hPH(zMң`X&|5&L5ɶ.xh/߯ 6r=,c8H-fHxQn.^>_E%ɢpX^#2WdL:XԦx"j$ b KN[X:/'Jo>a%D!df bMo$׎")R<mt ұcj+h aӪʚX0%i0kU"Ș>L5Nx6}ޚ zY?odqڦƃK,{D7So-}k=<[||iU+[u-}땐L)GV,Ay[n,{jzz˳oKlmvgF=:b} b)\/s)e0tby*^yXЛ=2Gl9v^Yk2deRcVY <`W֒%(TPzNow@J?UТEg] V ,PԵg7Hu!dGx #^x^Pq>=78Z#H*7@ g+#U2; :f!Vm_mpl]K|=,jtgO8 Af(2|{hD/v.6~t1/4*ܰHsE¦Z=WTep=X/Nf*+% -ǼiϞ;݂;p K|Ki-_xfM՛|JsL Be3 4#jeF"[oR;m;w^'lw"u[dXorlҲQVJQ(4692;Iq> 艼qU ;/ZBYkUxy{pm]ES^~(wΚdH!L%A ]@蚒+q y AzMMɲDgs@VF&0I*I&J*(Vnҕ= 6;34 چP*3a3hOyOJGI%\ݑD(X Ab fm/֫h&ף]:&?y)K9˓ ʌObd)y )Zd LVہRH]P||{`31$S PM*6V ' $P}2MjHZ`ų`~"_R bdBȐ,eObIp謗mEEPs|{sm᳷>x mrҦKAHv+;<$d\%OԬv !&L Wnw$7Sz"owCթl{YN ֈxH fu1{o:c<8:xeCJY&JTr{Gg?z['vڥ]d)3|DFzaQ#@ !Hӥ0&*=0F.S·uo{f5 R,V`;ⶾϾV"f}{-$aX{NPOC,,0ΰWqe*"\4g,ŒttgoTäToBI؀Do<#]Gn Tpm6 lQ8A(uG^RDL>S:yQI!%!ΐ2CpI:!c ܳ{rQ:-I3i摙Dp'O뙣|;yojq#>L./gh3fE5l GT?o ɞ FlC8Et3VB&CYEdMrV'Θw >) da83JlD$ɄҰ4ҳ})2)m0`եcٙ8pDL$k<;_;XN& 5T5=T̳zܢ;U'Ր_;5=`Bd#Klj B$ۡvncDȔ$f*g Vɥ5} R8#DO^"L uS de,)I5v S;gיmދ` FA'Ď9g&OkT",4ʈi ϗUW_NLQ됮L1@ȥ9ty@huS?9d1.-HD^K7_/3zh>JrlVlx|[ *EwJ.oW5xT֋P0cax.>߽͞9͝'ͭCuznoNWޚ}OmM E42oKk*F ZT{oΞf4 %Qs)LrXeR%k21I"ik|Z'|J@b-hA90hR,]E`UEkXDwL5}ss~1~'x^β5lCO}W'2kulO_ ٮV I;p6L12RBLBl DA<7i) !H`jPT|VP)e2Vp>pW'qquK[w;_s|ڧ~EYg|r3UWD7v?!3Ɗ16Fc&3҈mR?!˒[w.³s<]'O$ho6S!x8(&+E uX55c` xmYR ) E6-p3{JIceP7;_2䈹h05nL&oYMJ1cbC9kӫ3ԇ)CrsFp+%D(>d, ឫ3* :¾|O+Zɔf_)Rx#\i(wFب@+3ӿg:^(V % J1Q˨=kYd,ޒv'ԭ"^wtjƸHJDJ 
E%T`JPhQK̊mFu]ٛ_/~c{g跫iO'qt3Chkə֖""S#XGz1Г6g%BVlk­əԃZHm"ת&lA?`<; y`ǦK*<'wV`4{}'<Ͽ˧K?/J~O_?3:\XpVTN 5?~{FӾEM_]e[}>޶;uhfm*_~2>kO=t]5gяTp',6,I`rULC2B>é !^q-I_1H2ʭò@-8Y$`_ciRJ##o]Dv5zG:B,dqXؼf>?q Nv;!8rN?dvCL,Y5YF0*zeJT;Lӷ5;s7:6[ w>ku~kwX7|qU]0)M x'7$3.TtBM'@ KSbaL?nje:.̐L1#eH ?ZUʐLRL>$3ݐ̴L=`:|}8:Ɨ; =FNjm ii!8(3,/U\vJkܶT)Y^> ˋ6̎(70lr4l|&4 t&f"4 jͦ1&uv`n(!hukVjNO/K2s-Ew%MXNw'5g0d|I¸ N޵Ƒ#"<9f#d=fFc<i %OnbaKJRK,$r&*23d<j"ȍjo>HtԆ$Ƶ{챒(Zȕ{*6ؓg*%A7 \Dσ swtꞇ8LOhb8㊓2{,x|wPs1LjLDhs FǀLjFcEF祕R2N4 Sso"%V?BjqkXSl Rq8w#,Ypf< a,ZpXxf羏f'Ywe7nS<99<1bGv5L(wUəX`˾f^_=Fv&,6'ކłƦ;lZ'Qr=5i'J\{4ݜt8w#v:;-CAaP3j 6O 2/cMD >$Ruv@ހ-!9plpc'AzRY!zVQ)d>Si\LVJ]q8wʩ1 Pq~<`D3"Έxk|v0dM츤#tcr6BfSm"}ˤl(+Jfl2Y "^L*){:[}C"̣3#s7"NmCZ%E4⌋h p TȣwD#KbR_v&f\.z(x(x{M^6 ku[TL΂_[t,z7-&ST1E!\zu5Ϲw!>{$?{ҫg :7H(%TzGvꢾ\_䳒We?\ A {#I"Dȵ|T} DeT$1%N,(wNr&vTUFUØ> -MDŧ6 0%a)M +NdM0M{HR\>Qٻ ̏j\^b䅕\1Δ(R&t{TFVoاUUޝRV1)D'"Ó:qJXЪ5\1\R⑟)}NNJTub nNMRL8e(̔r%ԽRՍ %ye)Ei9h͘^-VEc -> =1B)TIw6,sIϏƼ6ы}D=Ğ6r#3`%eNBX]I1Y9eQZ#,wM̙ GK?!a9> q,TjlHFo+bԫ1̈́s= ,O߿Cb,㴣Ffnl[>okwWܿoR(|GQ{R{Pb(*b98#^%Cn!c Yiz4WV%RcŊJ\Ȳ א6Bͺ\ z^ c1}Ib#)0%`%[l ыfĹ>6{ّmٷ 3X*y߶qts-6?pE^츞Ӌi[U~ʑy'ɇf)ie''F%o<A)G5,^fCl ):!7zdzŗ xrfW^bQ⢱OxYՕ{T$@+c8L6̖r8Ea'B5}ȐTl65c-x,ԿbI/U-=ydSi 궚Vc=,>UQ G !j1[PcS; 谤|]cZ~9͇ecۘ$a3(dU BYI 1x :=V Wgm)mlQsG7 }ZG|lX-oI6aEʦJƵDs!+J{-rSO&yFdOU5|(f`$N1W|e6sN{<3Tr)4e_lb}J|7Ռ7oq lSz Ϧ zt1iÕCakEQk.|By\?Ac.o=an+oGr4oۮ;wn\$ZׇhW7~  v[>鵦_p|4]t2KOfB5?ovTK ]tI3PȭͿQl.|33ߵ.xyS wY$=%#@X8C0Z Qt{wizP LsσP9*F;gl YZR ܆>A pZx)[@nӑY3scjl_/ϤflP}#tٴVG?oB[pfR# 1.T '%:ghJ\IR<[t0l=7FѐQx3sgߋ=_Z\;X"u14\Wt7, .D"s"EW̉EZ ~"ErPDD %Rp@֎"q g⮊])讬{}$\Nރ{7c*7m3p;YB->! {L+8fFzvh2mh4 >;McƎ8ZІoZpk⹯&AUS}^ d yz1z!~0̵!NK3{8=6|)Pldk@`f|j!ѭ~W[x? 
) 6a >\yҜީl ]-e֔FzSn ݡ) )#T*ma؈E\;.Rjց i;oI`+"⮊O=y=])w讬B*;8wE⮊H٥Kwco+hyBQphto?ϔs}>M#Ӂ7a 2!Aԑ&s#ɄhQ !cU6Ue2tG?t%l90F <[H1%2<pMzaQ%QciUv,7Tu|Vӌ/f%ѯ^å= ~C<Y$ÁE\u0"v>g)m., FCG9m[鶭tV:ʉr(':ʉr(':ʉrVaʊJ9Y+&B!ft.e:Œ0#3:Œ0#3:Œ..9,|׬Wlb6ӪƑ<&_T# QX&m MAJuXG>e~GY|NbK@fԟ8ӕ֮vS5~|dՠ6_5;Xxymȴ;+!rg QpʀqĜQǺ1"(+1S HC'!XGP0 uBi?P$sr,8[ =#RHҧu3P.EbRXZM[nZ%BPesѧA*|*,r2ka$Ηt>hjLY`s@(FaXad4X.I5r8%QU#BcP`4Bh0/cXRCD,f%H˳LQ5ң"Sd2B3NF E_kjƐ8Mr9TjrV$HM$=Y <9x K֠L3bu<\7RiXx<A'ǐbRi,''mXl! ~奩(Bg2fu|6(,?Xʸ}hT^4"xO\I5Fɓg9ҨG`ƽNr_isO-O+Owē|߻L׸lGR0PSLpv8`G >xc</G 9#fvmj^faȜDf,/nXЀhh:< ӌΨK·d6J4zCg5`UV U+r~fv⌦8YgC,œa?M.%HՆ\G>pr懋dlUOFI{:_ՍXݍ+YVwd GQ,3h82udi+{l}\g5]qY_G-[՘WK~{J-?T 榚t^_:鷟ߝ9ק鷷߼;ž黷i#M GDXUPN;]Z]S-S|~UG^]Q6/W_ӏq_NFEYsz\8 ?L4 Mf~ Ì[RTsVxZb CV[2b1M4qOn?P{[<% Uex(XH!ai!o, Y[={g5=?G&C9d>h\ReV'7|a%T{ ܇2eW_ogĀGW +'׻m woO^>8}@4E ayK~ɍXhճЅy˝Wrn95|[X꞉EX'iCZ鐖X:Ii6s>emC١)=/o`gN~X&ݎ6 G RZ1t| SƱx Acʑj43:5pI6*a:W5T8Q& .*5DV//(P$1ʆZM͆ QtsI/oCav|imϽeFI4K)gAPidԆ[)TښR#()z0hI<+39ڞ8=c=RVӌ]}! wI} NUA/izqKqq? F{l4,0bNV[ 3\Rxv0Uc,iJ`Q&h rduJsRe+{jla4LK!xjڱv`i|<b eJHv3N?񊇜Ϣn\VPi!2䒢h$dQ D ˶C*HA5_A8aԗ~#Vӏ]="Ty#90D*`=/ƎTdFj,9JW9JBPQ`u9ikTpHFRTI~&D@#ŐB3֮&f U묦%EU//v~qmFpuF`EGVy瘘+@IE\H*ۈO&YK;v?Ի'pa:eguX̺"U?J^HJYGṖxp}LDlʆޑO#vN, UA tTAojWO`N}tUFHUb]"ӴPՋini铷4E#ٿRA.{رm=]V0$Ewu%U@@L E%"R!w.|i(ϧ_aN<>}9n|\]ՙ, %_~Q(o.,'Վ\h>h۳ jAG㇯w nbOkjt֭:}`(K5t^H\Mc MR#O0R ='9I/^;,\l="ζ\AlyInCyB.``3gql2wiQ;;+RlꅫʯTݻiE mX`N}]O69Q2@>:::g1>z> }J ,OM܄ld;?g#fH HtyIρDl<$B5x#iEkώ DF\B"G9쒂4ӈ|)[33I4+ >`H4;C^hum-{BG`><k#twn4d6ϥ.@3ڌR.$~r9q=U|3+Jk#O-AG JsPɊ0u}+#;󆙨P8΢WG3ڀ԰6QJJHo`rtb1UFvlHxXw);vb:;?2<~>d]o٧>빻ͳ{=/M_ D.Q@ThK8jQm!ĩx + Hi=VUABjq {@G.N:, y(֤&PzbvIlcΣ'͆u <n;r7d #Oyh/}s{nr\Nk<> 1u8Q>` KGk #SFI0f3_|,3zf< m *ϊz rQRoKQzL$=:>o.O`Ǟ[} Sl) aSgBMz0F}%֜H cV?̬Í~i7},:zS?FgvGpa?Axijt7w4o܍ WRF6 )E^Zv)e6gsGٔP`nwo-'iUwN-3_ݝRYR*ގ,=Eh0U_],6ۻ8;?\ݨ`p]Z'6_:ÃUyI> }*4L.WS(łIr 3ؑC[} @'6w价9hro;pEG+@愥MQѥ'R YʷGL9&b']\kO7]v3ԣ"`_nn"E.{ v}xr8XƏ&~|ٕt#.h*KqJMʄPđ"Ps5-شOZn9Aڣr4yKz9&pHd&:Pى-U q*26}q48+ QS=odou+g1╙ "U:?9#QFb_Xz\}ŇGWh3t?r36L5y03LX39-.8sK ~0+|R_LMDSNeKX(6** PT@![$io i 
Q$H63OdM-޵8sXw#o]UO߿,Q:>6CP8Uou bf# 2 їneHّaB04uĬeG#{. vw>Mg֑ L *Fr1k4b9x/1]ּ%W kƧ62V +EgXIj`+vE!p̣FuaD}>!e_Z8wyDե{T _2^}\C裦x/>"V4YBI.VCk?8h| 4[0CA<9.ч)Bc_ȋɈ6F:Q5l+10lӲBD >&k8mk wR ay-05=  I>-b!)vd[P[P),F[+G?#, OGqd=!zӱMDq${("Klo/p\&8.rADZWedx*u0~Lٸ/Sz.8Jw\094i$Uf9H~8#}r^΍/z/7l=49܈8 l)e<.)sd}@U9TyTcS>=/1l<:`qˢrALL&D1@8YoH7f5-m>⣼{?iZu獈6`cJ* LV;3eJp1Z&u4~0Wc# f bdR5МԀV?eIx+s$Z(3 3I`n7@2=~w, {C v=Es!!e # ۔8 Ф 6O4PKle/Rkg[{}$CC|!0P08VblkgyD-F*nKя-t8@X4 $,v2MF0͐zuV!ѓ6xHZkZ3k֑5gr@ "B#flt.6XBo }-{BGz k ((k#twώ &/><-O%H谝z!fܔra=&Ρg3و>+qkйŸ(gA(H6C*k YB5&+}b0NfBc-Zq{t1 H ksTn6yh+GW!ddn& ƿúHs _Ai0%q|$]u}::Ps<#22}QJ%]Qj 0&N;9V <{x>$v 1&V  * wr8Wx=#;˩:6cKQޯ=@fCe_wrH>|Da;Yn葞V_2TQԲcwmHgmG`^3Lbdm#ٙ8Wdmʖj6b=Dqv!kOL1xbG+pk7+-ozY_yjX mP6Hr1bhe#ݻCMu~l?8ϙIg-BJ6h#(9-$<D(#g:;U6_HeӴ%XMsl_AWʍ܂o{scmM!LitN lJӾ9CsPAӁ7UnTb:& -bR"Zt2J.]XϷmGznew>!Ec3 A9AC s&L%8Eh齳2C_nPvy{#-vDc}~踄rrW NmmPPF )gG_J&pbb9B8&8h반qMN npCR8lN\ ]:dth k&F&5k!dT_^Jy= B iEISN=23+q|ȟq'-y=|"I57!f+q 'gmЌ~l E#lF5R;R\\o-}پG-{RgJ)C(J>L2AZbPǙwQ8_@ L 9\#pMpG%(#4Ĝ *C02Wƙ٤DD+_@[x oAɷY"' >6T=o=TUl T `.Q xiϏW_?Sd,i504ԖڛCwaDDX!1PgT)(, A\'YIa}0+2,sqQq!=W yo3x1x%T0ĵ/»pm]-frd2FԶ}$7M__IbB{%}{ISPEͭ͟H7gI!}&|>]T+]J8}t5\!pߐHd(1B%;iAGT1]s97I)sdQTpz범22B#(V0K"`ȥ8Td.A N"wLb.|eݽ9Rs:>zBh^]d}؆mfo0>A>wm6*T~xV nQE s0_Rdr!JtJ M$ѐ=(UpuI0sDKe Z!t"ACLO$\|+z|\,[޳^s\9ޮStr,HT\3Wg(+՟ BR2!4FCrM"4%0(m-#bu; __]168$hHG6㻨Á!(@Y]RpeﳓFFB%&%$X~/)8)\ZT;nԵRRAI)K92DD21й\ژ zJr$AaDuªdu\W:ygKr%pUf}4NwW~JGe0ά-cB])=H<{ +H %sV4AD + ck忝WعP6sbPrA* (z.jA&sAi8%.#ݧ,oqwHtZۀ ǘfJ)@,v{Y^yW$:I`qfӼi?A<AgjV|"mE'1I -Om`1Ҩ04$73ev=e?hyJȓL}\S&kR=H()&8i;L읲Ke ǣ =<3 T(1 &jux;2='.hBi*Y!si>;; 0Z^4d:K4og͛˘FZSk/mԊ6Nm^_Bv(ip噦]bf·ƻ -+91prn EF?YblHV/IG:^6X>L}0#1"褕Q4eN~2:Mrl&*Qglԥj9rЁ3)kZ>=Wk'I59vZѩ5]gQw{i9>X~~|aݏphn $01CZCxS DͧN&ꚏdܛ席B;Le )? ~>n? 
']y'ǧ%a{S.w.qPf_!͏z MhuЗo!C+Baeviܒ,+uhn) EdR-e.4 !,Y #iJTyǶ8d^ӲϷ]:<iog{@y+l*>ODjNΪ3]F{gtAi ISqbDN >p.M{WcC5q665,8iwCw^FaˇӼzaҊեa8d^ ^ph(3k|7R'dJ>0Ejz*>sڀ<@物3WރaTRvP žjC K (P>~t4޹ Y77i=2B/ /X^,/:\ ì].X^/X,[a0@8QWIstUP #dwMV5 xt&kr_|[ waLKD.MQcOCTv90m=+8GBX+t֞0HG䪌Y/#7'%Ld_Dà$VrHh2*3j5r3*5:w@:灠| (d[my[qǥzWlT%EgډRC`QkJh 2jÝKU@LV cdqDRBQ@@?""^RqB&!U[2V#gdUkۍȒBurN^xgc^ ~Y&D)7NwlI,^Z%K< Zd;*ddDUFl\8[e˅z^.Zr`de6?|྽/vKU>99/;cbB=ste;Y,^)F3ڮ7{háf-Xôi}1€S1A'8isi۱ڳUǾmffm#- /CW(/eU6t BuNBYWƨW_e]ƈT!3A>AFP;j{״7D@*sJO5Ua{sox8pm9r~'~ #V200ګQh!O5-UUdQ|2jA5)j<>QN4|жW )^cfY:~Fώ.ŋ_^lU//̼CtҾnuU׌w`xT5SH6XWJш{/ssؗ|/@a3Pw-_i?>^H{R%=65UuV8:3,AܬZN= pw|΋gy o.oFJa7Qowmn"e7k8k~+ }>9Z,&Opd?੥,hc29Q tl[avqlhs5G(ūmR.%SGl,Nx7^F}0GL,~UF >6;z+TCA>7I`2)8lPEVq١A$ZǎDD;Ǣ[ѢiMɺr={j  p:oL@IQ.W+5R˜DiY{nt 8 91ZYuUc|h>=Q.9V߸&.Yfk 2)E5P&]@WaFC.XjGGcT>b5p͞BQizU"VWJ*=Iu>߾9ɵx*6lv TP,J&Kqn7/Bާ$ͣzΛk#iJj MQ Ar#钍5 :>|8ǁMdB9α׭`1vyRrm4dlzaCKU+ܜ-/K %xTMU.hY[V+= Qsװ#Ԋ;mT/ ~ E*US`#9rN.- OFgfR|WZo iɸl "ˮFPyU7nJ|XZ,X]pl5M)`:+ VEzޚcqm` Ü`9^Hu^Q+wN d] +lBx&W:;ǎA6B@F;=5(!ȮdIc@ {- 9q2M!l-EP 0,JLh*D|0wQU1a |k)& A'b̂G 9:?%:T]|…E*Pgk>dD5 V rl`fih!\= :k¦9 )E)taIs$XBp [ZuϪ J*H{mQ0YcI?5 !Ѿ՘t1zN5> Dbtۃ4l=µ8b!z_P4yZgF(SW{3u+ԥj*&' ǎ1K=~0'$D{020`OiUpАJh+fFVd9Zf`yd1'i`ܬ`a {{[ɐLƜf^S1`u; g4 Ѽ<{:K`r@!YF[[1VH܁C@8o.̪d!:T?f}:y'*Ll;d#+NCO5 >}p9#gq1O֮2X#W`> ]{ #4Q=ԥ$mŬ>& 5y.x WH%t34k ) bL`QYg *5hv5\m4]CjA J5;ֱhfՌ1z^(ZFqσoDL"|S:W|݆im JgViiU(- R< B#;썇#rL;y8 衲},(585YW j-*E|M;jC:kˢ9;k\/Ù[ďޚདྷ"`!-7퐄M٤$@}AOj%\0F<4e_wq^z1-ׯ9׳3ɜq͂ ) 26{ 6YS@P[/vGB{gU aE㪩Ys55Ǥͨ@׶䠧[[y~{Pqab2|57jD RMwhU&8mP Zxi֚&mך0zlZ??-Д nL'=>\2 'YAi#8 N’a3[1ixzp4T(FlnT *ío1P˱ٱPdJxI'ȕ$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I II&f$Ǵ$PSIOzI (9H@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ |@NE%0-L R@6COeRA@1 d0o;IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$odԒ@1- ڨ> I9&WHH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ |@.7ޤ wRS~kb]ߋHK .mkZNp p[Lp hC. 
[binary data removed: gzip-compressed kubelet.log from the zuul-output tar archive — not human-readable]
OݗUe 2inuG-ۀ82UɂN5~d}%TUĝ(6T' O6VEhF/{P$di"YrBUF+xm i,{إ)G;IgY /ѡB-10 zv5Ԙ lcJY %PKc[ AK^ Bk۝2i mB[viX< ( g@ @ B25;9vEQpYAd&#E4Q] =(Djw7qVUp*y˴z2IX5 Dztqu*ip$timl%f k7{'QEŨն[SQ֚BZwkW/߼XVSjoo]ZݾEY__ j}J%n{:% zw:%Xţ.J¥'!\hS_?"?"sC<F>72\=AB9.:!Uҟ \F=v"+mdzpe\0gK9ł̍T Z;\̮$\Y/'WN\eO'^6\̮$\9#vNGA$7ֽ\\G_{Wh0|lJȸ kr_P2 aϬ>zo[,.W+z$93PFv+5.džyakZZ!>얅_$ zWzejժ.ϫf,WWpщjT/ivTcyjaC٘^Vh1 EgIj!gAm~YN|/۷k^u8}I ~c:&E7T@g9^ FxdDamoC8d0*8ݎ '4:,W{q"ܛM0#]o\3f8]u=r q߯ptg"w'57Hjd9VSbwiM״]Hgu^9 u32EOo+|T?kp;p~m_;V緎돯QZ+:&3GWm{#T" oqۿ^Vp{~76/VWtd;ԣ=sLzSw.q6g]##?fOU.\}socO|C7y.]yn?]W/ i{1baψLIׇ׿Ȥ~M?~ O͹7?d~=qgrvWWq&NRds0x-{)؀[iB~Xv`(M &NR8gxs`5{6ly<\;]b9AJR7jڞDOpPe! m>ڕ179Nqe~es>⶷; pouG,#V'>au~D.>L?BBNKMÈ!V.v 8s^Z*7>}1`ݨ25j̞GLL*LãvߺQ7W,|>*69>b5> 'Y(z?LUdТ:Ī@g3t`4C=(Ų%{{eu]"YHԹ̽2aD0W՜ /ZNjW54Odd8^޽=|o DnjkD)Bc8^Хkd] 嚳m5޳؆mv˭g]]Ki.Dn$mRNo*Ch]/Ƞ)o퉏F74}2|{ZdMc6~ܼ idC#~Mw|K#7x4pgޅ (L?r4|Wh7 ,Xs'ש?%aWsT;Bu3P2rqy/CBLw-?pzp5 HC}C(=d PZcJ(:a(v'tD#\͙V#h#"':̑ES9˥r6*vѣ"vFΎ}Cjƣo$C;ոuFnaYV 7_cl/~lR9ÈCCVYckGAT9Sg<ˈV zHH*eg-du(Xvo)E]S2:O9F{/fYZG8.t6;=_5iq[++L&qdnU7I7yL~gev/;ڽjPY7iOuc!md<;A@>sœρIn S]`$ܙԥN[O^{zx)B@ Q aY{bMa68o\ eU^0o"7$'IzyyD U$!BjsTh-sV#gRYㇹϐϵh0}>~**y.Zn q@oק'0ĵe6*"y3J)!5x%4,d4;,i J ddL죀$DE0dHpB&%U[3V#g2ljg y]](y  ȸ)cӋ;&On'iz0 /ߊ5vF,0Ag 'L"gVr-äN(ȶK2-u:-C{1 )ڔ&Q` 1N 3KnLX;1OEkW}E; vY^L @2:9I=L1gBYت0kc A:!d "$Z5,IQ7B%hCjlׇ-(T4b5W#Qvӈ8J!Ѩ]EI-x4Ω"gЌܠ.f\U*ID)4hR" )?Sv١Z:^Oue:ō-=7S.L Xp)IW_2G٥m|:8Ӌ'CSYTX-" JlJazpɢ}aU1J8nBx|֠Ш豉,2+r&15D$B*qډZ"^+vDȽh IJ%bu*p1oAiaMv"Y*+n Q"CW0PbW!~,Q`Zk_0 Z9sN2SҗMNCR1\fBA`ybn3, c$d/Љ“NzeJ@)8ǫ5+gÃ]@9 ɠ ̑ ʹWk¼^\i_KBkPi^^֑){\tq|~f BhTqL&L~vBqsV)n^*g?5B dLѥ%|T.$QDedYbb\"0L2nvB["TN|R1࢔ ޒ5z3$[] mzB^˱J-V~B;_Z$k~yvcګ\-t0|kwpC▻DޝV\kJRJs.d:d#e0Yeu1qVL/:/?ضyYH('V%*;˹%@@"Vq 21gʛ%º[dED0k=H\ $SdΪ&HJU/G c:? 
Onښv$Nb>'b:H%!JlFD!q.3(-gQn= icH1xK 2 XH@4 Lrp#cY l1$6U`oZxsӺ@x/0u1+>n_<7h0I'\12*$0g] +'AK$[EdcoZО78pw Nh+vsdqp{6&%xy̮;0(⬜|ƀ袶YpK OE8.sn>.V֡~~**y~&˫iN 4ܳ(r~Ǐfhqbhc/ӝ)r{i67^gOw7_fo_6VC\a?wk !V|s pgغ^;ҫg\7X?7kYf$Qh*|4zr7g kgl\7gߩI\FĚ篣q*ʽ:g7 Z 75ެ?^b?~|~,??ӧw?~ą;uH+pXG’ڗ< M~Ԯ4|jS+Qg ˼#hs7o hi$7 /I*9շmĆvMBbtu=?Bd_Nc"I/nRׇTyナt qdCgKmr>؈[OId @2)Ֆg2I i$S@,5Hp(eP!{I/pNp%сVTWPNΪ3aăt"ҿevq*q ^\L Zi鮤4<(J_ r79&⦓ ojs"hB6Z3$ =̓}l,xPuy A3*E0W:VDܒ OVΪsiWm o/ػ;F7hz`T>}aƕ}8sه~sx1}1/ {gl"~o]fڂƈZ+q+L#TffR͓h&>_װ%I\MDhVN\tsl)EgYg? ޘ w9жPY (juH& f q 6!Z l/\p梋/Y1q .j$-nBZ9vёwmKi *V{㺹*vv?*<%)ERNv EYOJXH, 3>3-(O >ҍ^Ҭ%- ^'Bhz"hPݘr)@b8'eVԬG#TRt{Zwc47G[jt7rn;TdRoT463VE.YQx'Qq` F3Rz˞j]ؗCO OG ![G$@M .*JdɹN.ĠzSTpmRR-w< p5w 1ؤɇyέHqs6ukӲ~Aע?1(76 !%0cBdž\Au݅ { D9:/Q+%LZtAy F1EwRGÔJk,&& 椾gkиWzQFf482tgvC;@HȆ&HNt9 xP'cZ|4n*7&0k{ISFfc>EIVq2i{3i<,~}} :볃l1~XÏg? hj`6YLd4xs>C/EvN߃~߂(( ?&.tP-'4] |}f=F|i9(4e컋σ!]w7r* muW7>JUXz7s`2σ-\Fz+ bސWUe9;>-T(16p!~sI~7Z , ߼z}`ݕR~n}h<V!u/%t^>3>zʾ.˻gqvNhzRAdR4ySʢVhmȫں&}6o.DteME Un|c-']U8jt*|(>e\ k9rm V<f縧1c:jq0. NŇa,^.5V'ˠ5-4(nt|9,~/A gr< 4kۛ ^<1FJ*pnq\gwF9Ha}ȗwC .C>񘆋xL3fT©vz/:"1uKλ;ܻN~wo:קGRɵ@Fv >Y,Ml.~NU4𶚩N,V!X)\qj/G_ƙs6- ?'XE51(Qd9^  u ;/2(W,tZ2|ChccTmZ-jMcn:KuDZ gPUObjHuHzȘZpȨ^8A ^hmIד^#=0䣲U1tlٙsU]֯]B㫉R3y3900vQ_}<& "vvk p7PQ1 *&gWIh#.Oxe%ews Uqjj]AVוQ,3OFGZ=-cI:K ˕J Hujݫzcyr:;4M>/ϊZ/hz(k'bbC!m \K,v\ZB IWcmz!me_`gN+Y6A  jC'ug,Fish-M!*!W&q ȱ0cIw/NiNtL>gFmSAHA__"[-j:#  FG ^G˭řJq$26&-ӐrJr.Fh@eXYiI!L:* !7&-o-#q@-Б5m ˂Yf֖pbII'18wJltCGd-T j.ΚH0T[sv:Do#v"Ф|VMwgMHru* MevkwơbhѫZZa1I<#yt$cJrAHFd4.ra=q6XHRh%Bc}mA߮efplrJ`ri= Y* o,}9ɂQ&WD,h =o5E&|w*e) 萣%.x O"^r"B/͉Z+d-XtE9^'.gI*PQ刽^R$Jne=m5(!8 j@gl]!J{۸o65=f6g7yjK=+]mVg-Qj*j4=Z.t.3]+D+E Qj +f1FU [ xDw-m+DEOW/t3tp ]!ZENWR^ ]ILB:DWXvg2p5 ]!ZKNWR~2JPm kB3kWvBIW0rXoX_lR_]`Fe,p^f $^[jEAߑ%e%=J$ч_nF冁b8 J^8ɟk#(d3j%!+8tG|1 2`h'\̦r)Dz]hp EmQ.@^RBQ*8 _Bbq8§GJXjZQoaNZ}Yb$VKQc2lO|F`h R;cKA(MeXUڰE;; tfIj"ۮ"Jjz*>. 
`-:CW7lĨCW bJW&+hW /)"J{fޱFt`#{d;yo֒g2eS]ў<1C ?Vp)1]+DY Q +Ct`)Bv-t(YOW/8ӢSt@++ ]!Z!NWRtJpB]!` ̥+thj;]!~2" s!BFtnw&V3vBˡ+EŇ LA@!{\ZB̵ (7clcq@' Z*}4 <C WNQ@G;- åˆ 7.MܣMeNHGDE.E 23<3<7Ux\,#6sK*dZc #8txs8l2p2hMaDiM @eXs0tR#ABtƤ]` pa]+DI QͯIOWOBWVHy Mnw&ִުP2Bzfݱ7}#['ۡ`;egtzoSN]`+CWxnvBmՓR[!F5+ths'%=]@I>Ѯ.]+D+T QJ +CtDVp dѶ Qmg'+Ct5+ њ4_jt۟l噩))(|Ӓ( =2E㚷/V%3fTڴ[д1tVͩ7&ٝp;pכvWKuלL>U[7"3&QqP5Q$Bs{BH0:k)C9& MŇtMA65V[luIT~gib6G ͏Lw/9 r3svU/ZV=guճ(Og'<,{@%۔ /m.  #h~Rͫ˯o݆{&?3[~OqqnV)yY% Aknh᲼-N JiܯFg2""xT,a(3*z.jYrtBJk}0Yٿ 4\LucƄB(]VWU$AhsfY8h5GE&ɅS er ^QafiON^Òx4kMkOlB1TZKasY76NCZso.z1٪ _ ,]OarsJ||}9L-yUVeZA󷶚;\8/,;Ф)HVI.' MS:3YGN wec&ד.T?ؠS&e[MPQZ7/f4z EtЏg!H-+jYͤuwۉ"T;P^jYƣzioy—ۅzΈo!<ppώ_˷ˏu[Q6ќt3$,n,=EjLQ&ڙ04_T֬N_Q+\(/e^hý8ƝpaN8_tնZ^}Ѿ< ?wwGCmV؝|!jx#ޫ^c(|: dx>}58GY\2ɢb AP8DO'˚G. J,1t-gu!4Ԯp1'6Nfk'&@j<2O[J7)+.!>]h|zw54WxI 5Fݏ :ILGsnGDRȢGuTcu݅Q1JU:+̒&sJ`pfs!d;#vǐ׀rܿ1q`+j7jn 0m/~qҕgV N*2gI(zeˉ+\B&dr!JtJ ws. z!P\9"%2n:!ʉ53vkJ?+ys9nB1E %#|"diSsd³ *wX{+@îM:q&t,ax3vokJ#_*物3eD+ 9(.);^GnE ?mkƎ0̝~꓀BۊK<~}p}sKIt {QY +KGZl zIn ={w_wp/xWxn;ڤG=cOn9n[>ޕLn]n!_K[ָ[+Z=';2bpl3vų9}YUp88SQ'Z&'|:d2*s*ńI e&>Nd(͡4J{8;{NcنraJN\ O>&A1H3%u}@3=ϦC؟SQUv9jQ aY{Max FrE:*e&wy! >Nҿ5/ "*X!tN e.u]Q}Q՘P4了~ ⬊}9cBu[{&&|)#N*t3|Vy{E6*"}3J)!5x%gY9D^jfSh9FVi, Ӄ%-d$#&,%k2-cgl*$c_[Ȼ3zG[XɴxƧYjox楾,n 0>+Aœq&x2Yɵv~:D .hLwjY4@0Mil2Xfvy<*or̒u[nhKPvgұ[mqGvy^L @2:9i=ؠydBS{ 2y\N-@@ሥ(Fх!k`@5upgl ]x,P,bgE[DyGI1Dʑh^ԉYe012kmmd\"w1C=rZI\U*D)4hR" )?"vvUGX!:;}.]<ō-=7 @pdZwNKA[/,dKhv1;}#+J=m𯪲ʦ$c%k{A.ZُkҫC`Q*USo**=VTYfEΤ1q!8DH-r{DȽx>J22(Tcނš4uEUVDr1%cɕ\oʮ̿nP5PjxgZsD/ Z9sN2SҗMQNCR1\fBA`yRPCrUZ`2q ӽ^c/׻g:ڈ+$@۽0DȘ k}D/ !jQtL6LLR; @*Kdt>\pQJPhoiy AYg#fXX^a)Yp'@43Jn]lW[`tkE1LiYe+i%J v9 e4BCv:(XUfZKQ-Ӆ~͝e{^a|h$AaĜuqTv^ܒF` E Vu` {b[bnu'<&4Ɋa"z<{ +H:ȜUM kB_IĿ||(_jiE8 d9 T(N.jARyĹ Π EQH{7R$/6 ``#9р`cB3Pe)yB37U`o/," 94o)sJx/NhC Š鰄J$=NzMX8F$ϳO CiT /?541L {$[ŞdmChW%S;%('%;yoiL'"O jvo3 "GNa Xiu]o^ 4!I߿+RIasv1;@?6xݻ !fKtvv:<]TI˝vt䵝ZQsfOqb11:qQm֘f{m^_0/م fHՂ?̃vnۥ 餉<<pbl]OVI=]Fƛ, kRDᨱQ4eJ>]ٿO6 0W6:^׎*crXǹ5_FTHOk_W"Fmkj65$o^_8__w?|_~~q޽^V? 
L0OVcaVϟǀ u?/|@׮yҩVQ&|~uC^}bmmHʷ^ ~J_Q6[nVMBts=!I/ڂf$r+/[HИG>~.C)1moA8E@2)Ֆg2I i$W@,%4HpԐ (eP!{tanF{~;ĠÁהF[aS-a~䬊<kwF'1ײ "N':5NSS+س~{Ss'ߜsg%g[uLy(=NN=Q0L8O"oUC2$ģ$"Vo@ l B.%lmSB.&rV8덗Y s` @.gaMI<PG͵6hI;$I:$q.KH$'2jEѐHTAtt'O;T%e e2X烗1eNp\ҥ4Ev] 3v ǴUtI eB#lHt"H3rI26~<{9X 0R9]koH+? ~? `g.AЏjۈ,y%9$$%IԲeѐMHvUjV)I8D UAMr5VM)F3^EɵR , #sJ3G˸eRoͪlV5Ob.fC gIi:=:\P{ttz>gR>b0'%DbUބ_{YM*/lD<$!;+)%ES"Ddq.}a.P--)\Db$0jL144@1qCVh 0wj-z3|nƚ>yrXk򏄂nw=y$u[[>da~hO? ?@c~) -A7>p=:h3󐣃7'7"8MV "v.oM.ư rx6̾m'0pdp,_6z+ѷEpC 7dwvCcoC?O; 6t7G՞]`f{^Wo^W-EۈJh*{=k gcet1z>}:D:?n-G(ƥY9j!^h k/t_ܻBWy-߻Zŕ{'\c]B$A3([HmT.x  LE{z.ax:,'},3j]޽57 Aah"JZu^Qg{/7>qss$1_U!^C"Γ 4|Y8EN9\rw1VפG4>nx!㜫,IxzQtF)j0ȟM9}'Hw'.g{ή2ELGcЀ7W6{w8a9 Q.'vT lw톯-習;?'fY'L\#d>l56k:B%|Y;t<:{|A#/χg1QϖW+߿sIc4>|8?S~WпbxVǠD- jvIS:#[ΥT(W?Y Cd5S3>-o#R~BqݱezN8Ԝ4MʛٓeZ]"{iMN~%{J{܂qD;N֜lO?uct8kuOyx7Θaͤ&fҜ"eŅfS{]YXcP% TNۓF8Nɞe!病TS\tw?r'Pmhтd&CgʩDU^cL.C,m0F30z8͍ [3[9w* Vqyӳ%-#JMI}ia&RFYGc!NĤZx+u.*(s"*:DHTt 3,!#4ס!W6ӿPfu,[CAruQe%H^ TJg=6txx*/ %_1=1`]l'N^s8q:o1ã12rR(#[c) C%-ˀBԲb54׳t1JG# M!TcgA: #֝Cēfh Z^P͑fϖP@@?KYf@kK %i1hN8qֳbQ lb=:MLH5EgMN`@iif6M1($-u$$&F4ރH%Us~3?6/GPT&8}C\pS=*tI}D(yt 5cJrAHBkH+*h]zIA HDuo8WlNx=fp R \ZOCY*].JJ*3$}LdA(A~-v1:"{G^=0+tV0M) `Q]{bm> (g^=ʩate_ T;Xa`"$;/'!u3dP]v2KHM)ل iHЂMFUy4%D4JQǏ@fL"[{md> ؆VswyJntbmޢ}[uk"_ @ -eNj QG!"tJT1"hR|y&WWp}g]eBUv߹ )02ۇ^w^}n ?GKW8挻_y4ߢ`;U0umvzFYp iL5xlĤO9_`?HCCq܎rꍲwCm(ʡ.a0M0r.zL (q"ŵI@$DdLL1ZޣJ 7?&u&R|B$)}El8[y| BO'mWG%p+k"8ZHͥŘq8CiOlG8s#YL^B Lb=假ZDLFCIhEYe 䘍!x G*Qjо$YXSV2QTg.4]~g3p?v^狹w9Y=*_{Zz\#U͈.Zֺ;% 5dPMJYK(#5dsQqn-&!ܩ!֐6/"P)`4IcrQh1hn,H J4ͼ Ezai` 4%YD\¸8qg\L7-~Rmy$)Y[͒E䎳;WR'R%1TġI,$ 0Axs.AE`2PQ>jKS$ͥĹ#MkhJi"lw$F,y hXB1*ɨV16G_eeg3H'6+Tg0?H 3:9, 풌T;oZ/)7֫fbԓ/ð_:Fs3ZDƺ8 l$QHI:@StO*E@M{{HC==o]֕,C[ypbPHˁ b+/[Oщ ׆(S pZQ4+cM{č-7wVíªH-%T*ZR"D4k Bb^Ssj6qzZ+o3 @tZp9ߵ1x ], c7R܎C~Zh9@c~0~w$4܅< nM~H(2l4 TǷh&s#U?n'!vVv!dp,_}G]zָ+CoMnmomznj>jʫݞM5[5xb</Pzj]Eˇ^?]moG+M/[X\uAZH퐔 ߯zF2IH&Fdkجyn({{puqژ]M j $l'pJzDW DWuJ,ji{Wƈ' tZ͟B1]=tuhcM4=*/tjuš"ztE4OtMo*/t625:JW @]=CZhzDW Wɾ!uJ(U UBK;OW ecqxCWƩG /]`MCW 
.}wJ!zt=OGFXss ]v3oy8~lU9fu "erzh_y"+'A,s 3NIX5Q7!4RljN> &S1(HtWfc>fqhT Ǫ/d:+tiȝ!A;>D̫/+XWw, SV,ʹ9q'yM?bp 3J@ԯ?Nݫ`C/;ZgLrJ͙"Irf%T-qnmF{5Z)IVKy$"S4aߧ {u.VNdXn ri, "߇u&GNo,$JtM(jBuTn1f?3)QէIy\]Y%JY Mo~=:9wx=Nvaonl?޻+Gv4 #LQ d>̲a,-@ulb,"G'Gkq0E4TIp@VT ePǟ*-:֣J[ZRԡ]Z[U &ɮ"PJFJ-D#&<')5 6W%oW5u4 Mzram7햭ej K;hvmy$i=%~amj _-oBI=C_MQA]`eo*%YOIh:]%bH{tj/ gU{3g$lغ߱1bɧV 0'OMWO|~hʻ]=tuhcƐf=+ـmUWV$tJ+ո`4y_,r-,”Ӥaξ?:4 64@uVnٴ&&p諏Ro|&TY{yO7Xu]:v0υ;/e)NQ Yع&"|qUhV#NQoV{r~1Oxiā،j 8h2e3B1سCVUxĞ oD+,EfAlv1y>Ւ4SCQCi4Me+Usi#}|شAr*̲K3*UMdz!օ;&Bȑ CsSl qrrkƝ஼%2)On=_0]貅B=Q\"]Ok8EwNޚM+ݖC}rQ _Vlvf׵M7gguO<wZc)@CJ&1>(e#aMp؉T0R LX[`1V+E!ZleF͝QFG2mhCe n  8qCyYokEc:W E ƾ31h̨zSz`RyĒP!kxΕ&^:)ʇ ()v@IEJ*P2(a1Js~'yu ; V m'gl 7m߷B-qq͸{*8SG\j)8JDBKpKCy;ZJUkzr# Ru+"?b59!Ew%!S[pa|Z{Gc4,^35A! P+h(Gk#'8>jK 2xӼ=D 58w3c{Jk\ؚfʅ].\Fl<wLm^Mbd}r3M>+glX" Ĝ"(ؑhLR)Wq"pxM !{D%$ڤB:vh/#vHit`ZD"rRحd!fWX58IˬMrAׄ[s7VA2eC"Ʈ0bkq(#Җ806NPF\"Q*EC4r.EGYnV!o"pFBb%xz ",iPA IF~:2#nFx-t'Ksiɡ[E6[g$X8$Eĉ`yGQpi Q ^|^|g[q(PƇ@a-n2Wn[ܐTُ3գs=`E{sgI_ɮPb5 $)7O9_=59He!L{)c.OƣyHZA-@yai?o;JE -aSHK8"^޲ȰD)F|#0#oPTwbn~/yr#x )EH-D2NM!TDyHcVvKp3"o&!)`Hn؈baHFxu|}ƜSC}ю?pB2_.O{mz1n,v UICսy_oJ٭ rX8\7 ]>X<(]҆AwQ,\Ŷ^+k)LH&BP5; UTsc%bDQ3kMSz[}Hw=w:ZZj: AkcXf"$O m`Qt*Wơx jx!L+mH_evH}uAl/>%aG1_pHQ/Q#vƀ$]SU*:Th֭N\&2F"'Npolg<T&#)Zm-m6#ozWv252%]4g~4 ê X>ĚvarG7Eg6[YF s0*IACBRʸ%#=(H$m1Kå_ub_smٞ彜Dž'b`ZZr$N4NPD2V`TDCA@Re-捭 X12nUɷ y{%u+PA3@[1IGE$iTGCbRn}ջ =(~~~:4}I[LO8u10͎I?'s" {}?ޅQѹ N=ߧt5̮T>7-yeq",K4(}F9~ZP@vF!h}Á.FcTgгe}m(N g`=Ew.J6\/pBIO3A< f"}\y?Eq*ER*M*F2VpY7M`keц 9JU#QqŴyv+7/+ 'l 1(bN̙Otn Zɸ<8uLj$-W$V Vca bQl]u]&zt9f;^p2~Cuja/U(;a&#y`0'#_ȞCz@~MUJ&t'>L;=?Ev7xǿ~)3/ݫ g` U$,]In;pc_tMSCx[ -XO]O6&yŸ7Gq+kk%@?Mxޏ//f']rjU͠+~,$lz$;*BEEտSe| ,V* 4VF&ne'ѽ-s}I$r™RDM! 1xPF)pKhjv8T^YIϟt`ˢ/Z!2-j$EtȾƓIO6~׽b\{ijV;c4_țhΨPO$Qfb)PyJDS.s\ 6x Ya$Idʱ7HZ.mCʁDC 5,O81` 6mx J@tQLr1?`+eL1 p/%{=ڨHo G7  &_LQ,7&QlL&dNMLӕ#҆쩰 ,;q Dt GX,)d8sFf0`uA+,!,CȳGiQ$aB0e! 
9.H\s {,eFf_>V ^f@1Ⱦ9wܥP%Yis䘳^J캓>Nq7\n(##gk7s~H).2'r)&n3,>nbҀ #%F/.-dh}@ :2蝟c3V)shIMrU$RP:xë(Zj]PA4['`#}B#n3qC,ÆӇ-hZܬHɆYsIz7"9-¢c Qt=gR0-qaՄ_zYI*/l 8y HCO<$NM0.}.;\ZZ4RR0y`5 q.]AIRm4QBH@bl.cVt k=D|ǐrxs5k}g?$|l#AΉ4UUzG'|xqZH6Dd{t[sO1|c/n9wI7)ZjTF@K@H@'F0-bcʥ0*S9)J@WW};}+lMQ᫖oɞ67k*y)M:UdPt lT`%O5Ѯ2/_*V1M|ƁqIZe΢8\iaѭiF ت<ߛA|=jεABV8H6}Lb-&*tl4_92@/ɵVP9yeIGPQLqPhcR D^3ԛN桕~/иWzZmG!MD 5mufa[^jBg/A\* ZQqgP:wr $^ι  V^t9 !t~| 9%{Vx2ٳ⋢j ZRhnJ-sōGŇ/'̽'fx18A+Ai]qÇ/;(yG7K}4:(AhQ2h!p}vSv}&uPSq&/MD|!EbB>'^7\,4t>吤WE篫8` YPEiqZwAS H1"_35!C>՝!\;«FyN߫@ӕuní\魞rC,[r?MH[|S`[lEx[J|ۼׇrҳ!F텙ɹ#C[ Yk0~`[?睳ix38gHs̝v?:E5eu'M/ wsΗ`MT *ҳ2%JNxU+KB<*O_ k֜+=QY Cd%S3>mT+H܌Yrv\L+{Tfٟz&U\U&(+㦆ntf08'R݂vD낱l".Fc8$m {iksv1w Ey[^=ȸwlmj{jm܆3IhKϒ뭘fƆyΰMNx3MČ.sk.n;ͺn&%7hWoP=s3uO/8o6C\QzC:)Wcc_fM)hą{8<Њ3 * R{N_Kb$i5 `2=T@J/&q%΅@cKc|kD_b` $hϜVʳd6d>p\вeVfuPFk!(TB% p+ט|h""b $/G_s*'O4Efu>ՃԃZ.w(Gf6$ryGyQU'dFdC2ƣyݨ~e_`rQQG(Ip,gJ}yѳ/UƓ4:%x\2 ^ (YvfO'y>-OgGEe!.Y7̎ϟS~UŴN}۟ 0{QyuOTvpXdzDH l{}I ;<( \Y/P/ϓz%1Vmfz5Vk !H+H~0.z/6OI_ I+ȑDJ-`~UUU5\k4x_%j)` spd,%s6GhEdMnАN)rK߳Ȭ23mv\_@3rgw,>\9|kc7cs[a6XQqt |Uys&%r0bUa 3q6fH6.] 
,@AQM%Ta>~FZL*T9# m !Ӗ%e `z f;b {Za[ -0M0r.zL (q"|pk "Iɘ%o+)QCW zaUًIpl+NtbDڶﵫ5.yw46h3ۗ.*Zhe##FdLrDiI% =D8s+YǙ3B@2#=假ZDLFCI&]5"vmrFHg<#( 5_謏‰Kr)+xjg&&ij GS7מ^$/3$HU[y.;:1^*:^ƹTxzK<-*qUKQ D"*8VaRk(" &76KPJGmiDvC[lF?+%TZ \-w7CB5Uj_~0e\&F"~ƹw5h#|D\)N,ZGS+mgN+9;iMm /|ٴ޴28}0l.Ă-žy!Ia.WĨSGozj*#꤭:oБ V@ED *1|P^ITʉu6"(5>FDJ"B8KMtJF T[k4ODirƽ)$|Dg&?{=|ח WW %mn@z- lнC{Ci-Kl{{toRJI]QWY\E]eiwu7dn4.igѓ;U흟tϡI6-DT C:Z9 :8]f1Sc%(ipPTqOt>> i=:k[:/0M0r.zL (q"xB%Z$H0&&D@.%eKuI뤭M$BFcBbkô҄݁;5t_c0Xq"U@v$ؠ]FOR̦ ӺAG/1$1kC| %6p++V"8ZHťA |<}Ù[̭digrfTq) {1x މLL#*δ1)B6#T4Ԡ~I3.> 'dSV2Q2lMqD/[S7מ.^p 'rtW熪G@lNAL/W53ʪ+Q l=,ۉ&Kd)x Q"1qe^/uժބ X>XյZzKYڷw%ajժ@Ʉ:xC-@rޢh@sn:Q"u;˝>GdҺqR&т D3B:黽ܞv>iՂDM'ﶣz G0ݠv;+{ͩ#8|fp>tWEnw"*+bAYLm#uӂ;IdO|жu\)>ΩmϘM55ƩS-/5H:EdӔ8!93kuڜ!p*Fw*E-_'ݑ}r(z՜$C#D6h$__C֩0#Z7׎w\7v:ۊ#u|Լi(?}B?Yͽehxx;wWyp*Ʈܮwpգu4+B)oPlެpݎp+TֆU-]:kEed_ \¬:zip'`ZЬtUSz}؝ʬ^uެ+Ē&ujuz ?\As%χa6o2Q!o7q֛Gzcs^Á1Onʵ",m2;U1IT{EE:JL ׮gNϯ< :og7ݫظf^veO9;u $j]̅b \?dèkaz$\i'8Y&'9SZA/ G3A84S0aGrt)Gn4ku{4>+[MM58R E; Vyku{SoC,^\Ru:.BN>K \H&=I}&:9 D\Fk 9 WN4}a +.p jGB&;M^_k7տ;7_L.gK,f\/^_4}]9Hpq3"_ެs d՛\&!zkX5#.~a~vѣwv5S|+#:m䪷lN匲k$X0oFR'{/_鱺RĚJΉʺv't8ӏ>ߞ?3?{` !saNWm+¿v@ W?j֫}Ws#6x`mz᛼ٻHn$W 9# h,lYBWURJuuJr6RbfF< \[tϮ;s>[Gߏejވ}yg=dY)9P4-,"7Y:_x{Uܪh[y b7"m"؛KxqVYѡX%rEt1f!޶*FmA(Pj,Nqc8@0zϯ8)}N T% qzw l5Xe 4e^g]iזܱn6qaAyoAyǹ uس>HԝI1F|ԝ6k` xNaSjȔ -TOI<#R $kjƓmH҆h[T']G["[I@YD%ijG#tr9_P#g+nRއPmf mb:) }]ĥ4Gz(}Lm/׌!RA)vZQ91TCBCݚ2쩳 [œeZb*z1֐D.$ݘ29C8˪lէ̳'pCJ^y6`|0JU8vUE[鶔L(x,B1{3 !0j9< d^pYІnh`Ny|0>'ޕM9 jcUC0E?Du0*F NZ^pEC:=٫w a^t"}hVr*IT|cJblFa5B$H_cv/@Ǽoybwz:Wtz"@CM/OpFc=,>ůXVʖ=0T02fB4-ybY$s##E9ԊI`u !e9ͱ˗dcI8wUT)>_R@>)A rS>C '-rU ~HR1U4CU^!Q|FVi/P#׊t 8 A 儨wvn͹Q˻&7e=}f';#&F?9x{ʷWlR`FW"%`D*β Nٮ֜kJ*8N j)T1טL"'nUR[snG~\v]PB',|P,rttܘ뫛xGa/k|ovh8i1|GFlrjAӗe3Y g0`HCW [AC (eS24شCSX%S! 
O\B5vN|.ffBdj楬"cL!GbtbJ˯t58T;UQ"4H,PDl ؛b4)(JXԻ}܎cPROW\0 "v]vFD;!℈7.fopMJ")K/r9Ĥ j|ĠrEcmFRp&/^-YbHh u ?wo!}W'4JvEa oBg],U ltQxFܻrVF W1eWZuŝpqo;{YǮx; :ʞ?i%{SܔDُ^;eg?|Z~-β]J*e@CXc9RVC5bbPU:%B>L"$NwUf^$BfUK8 13kASvrhwQ""P.} ,}jq=5cKx\9N-}7H-(BU2)S4\t1'KubPKUYT°}HđHҘ"0d*&6[snwSֵeu/wu~]~ŝn۱tIdp2~֮V}sz1fTFS2ŮOV4*,:DZ% )08oBF-)R}H52{Pu ]iv)xB 6?t&,PTA[Qnmjry>]?(}vcHɂ'Pim4޷g&Z"<ltPJv`BaW#37=Ԙ;DhB;= a!rc{?nǠc*7P>nƟo_mD.o_}+|MWysWܧN)|{ ỶxU\~WkxV¹)r$~1++-kWģυvo7z-q/F*_"8'iX(ۮyn:,|X8tC#6l̾|5v1gnns8ɋ::ۤA1Gǫzo/f<^mT L\b4+_fb_WMbitl=}W[#.r;S$S=[( 2Hęg&A+۷w4#ߩ-A+D9촚;'S{XGAڂ!<_j_K| D]8{r|M>k@HQh QkLGAsSݚs(Tv)/˼nJ<+{T!<]5S, 1F1CJ5eDr@ s1Dm[K1BAnn;\{1ܗElb$Aogw +%ae0I[f罥WڊZbMե\ 3)~8""m;^eˋޛLd $mE0Ar`S$$qV}$Ԣ<9`r~qJy7 ;:W8zn+02T D˝$$LqO Hn5huSbF9b* !'E?/C ucKeNJp+F&IVKv6 ^Z638+(Q*@T͊V/1XF[xwԃKYPH! `Ӑ<8 @fRR]wG~{Գ2EdM"?TbESuH>;|& }+>y}} i{.]M!Te|f,/xqt(IC,˹#h8o-g1|M,{>xՒn9l~T1*Y|1{=og]}Q'˧}\mSK\ l~4We;-FW{w8lwkVw!motҶĶnפ_o;=g5%mp\p>--~Ei^gca㆚Mb{?we>7{i%ᛏg{ZkmȮ0!Azܪ[%H^ck$XsF$*d{bϹ$M񿛒eؙUkLдoU6ٟ󿇓[G,+?T~zQ'+_SuCЋyΞLϤ7u}6;nAz*-/47%E_ӺzvuYfz܁Yt[ o?;8ep~q >ï >1$lu@Jmq2sE̗fSH~rv;tsc@@lc0ѠU]G~ϲݷ䌖IG:R-2Tu1T*d%ӼLsoQ wlf!mVJ`׽Y[>32\R6d-R,5T:VHO,Wӝ{k M&>| -x%Yxa3f֊?S.<3O@ ?qh7qB%b+ dr9"dg0Kn8Z.8jy.X,"sK &ıy+.'ifyq=zco7EZpu&.ZИ,O~:ի \w0~syuq6_.0?N#W˻VڛG|95N̮.ޑ ^|w 1+kXJВu7"ցFgdqūrs.LB2->M1 ^W'puU-ׄPt6"lPo>3*+BViTe1 b2sV/Qd9>vMvU&rWzc:Yɏx;LB[O98%iSME?u9AHp0c⮋^AwE/#ՈJKZ]P=U׈J ] caJP}u~DthJѨ+AkwSlRER*<;]mgzt`6Cno@W|vɏpeBWV]+Ayc@WCWV)GCWׅЕet%(C8wn >] hGCW ] Zt%(;ҕS1%؍M2(hu7t?tUqL ЕIsNWcW{IWF3]]ߘ#,֐Qh?(|vҮ1.me\&\?W7T̲::Ңz1덂Kz4)8ƙ\dnǘ/{KpW_k4Q&PWýWk3b*v5)1> R͔6#lx\KcÂ֙]ÂۃC1Q 5 Ok[t%pYmy'@n̎9U^]3"pi4J>\+Á> R@l; nx]6C\j#4]]=AG]+4:]U@W{HW@vDtQЕa,t%hI:] JC*Z=w35w3jL1 )ew7Kn,t%hlttljLJ]Aw~ !}H:p]p4IAt%(tUgǗJ66z6Fdtt{]ة;Օ n=;TLe^L~9>9{BSey*8~ᇕQȻ}sNa@վ\ԲRbYg{ׄ"ŇnPx]E8 yI''.S>io+Qί?:(u*kT"L痳 ˺ {'ҧXCW:^[\+r|fWDBX.ק/{w+[^Y5Jod#@mZ_gz-_u2ؒCIz>ؚ&k#yrz]>$Np(+/͏ 1^WsN|n2E5QYm^åMO(u5R =3prN',i"]] Λ뇚92Ь1FTU*6s5Z- VMUml.HSZ䔔^{;,ZØ)ټ՗K[ 1tQwp9,xT5 P&sGWa<>Urihx9!gB+ oTh]##G+# 4⢊@>M?TڻUVC*gQ^fṵ 99rX|x`'̅85ys*U˝9 S 
$]۪T$ף(]>]wFǰM>,QMѷ5l',Т+)$8:AOhkc6CzQimDj GMVCHaQU-1%Bb ![뚯͸\"FUi,IոXl)6.hC2+hlnƢ<7% ̜ÊOubKoe DGkOM#\Kͺ#nG0L#mTbˌP4\ H`m%Έ- O9@EE+h-i氾mmVV @Rܪʃ:}HuhluXLՕ!pąd*B9:vLWgjm AP)UkѰ֞'<‡*T)Jp\;wMBAQVe/z2 yA16vx X+UlX&0VjNddb}A xPt&X( r M餻PJ*T[!z@g3 dAU %u%fH57]\?1$u0޺ Db BHPEX&T "=+FC(d9AςGV Q!6KQ0q)Q2)ԙ@sB:CAn7uD&/E65@4TCAN)(J8A1n,V[#)d~GBZ]m ()e$l[dW>#30H]ZX5UQR@}JI6r 9**Fh_ʔ<[РˠDt0Pll,@G@H $y;+aMḧ kE~Bq>hҰA;"tڦւ'-A7 / br*} YeT2)3$]I,T*#d`S`xc$_UH3{م՜  DLk [ Pk0tq,1h< = KH`̉|$H֠ix< #7|uXduU% 9ՠjK0-VwN52 6dB^ Hl҇a|W}wq4\t詈e}2 ]{ #AKxu9-C^l5AJqyr ::8lk ) 2I-`aF !nK䔐v%)%ڱ, +P(pPC`ID[n%˽eo;Vs`Н) @GaJ-E10LMUQ2P} C VA;39\F [a!dURH3A7yP&BE:QbT^ xM[j!jˢ9hY#YA J᭻Cj*Zk'ުjE`!-#!m5mUn%P`]Ⱦ{lZUSrcV6va~{ytOۯ`t2]ܦښsd&Y8P`0uh36y 6ٻFn,W|)`v`|XlI$i lk,KIvw˪VIjlK VU.9;Fhj_?@5uP,\0y Px&ST@ YZcOarV=c%~gBe*H '-[a^a0(*bژ@:m$Ly#4+0 \ c'!G2bZݳVi,x {BЎ$=p \s80r=H@JXz l$fH>RY3I:Lڐp"=t2^)OஹwvY)ƠLJS]YxQJp"ڔ#= Ag-i /m,@m,FRrKYG՘聯t z ӡ@;9>d8=|5zr> 8˲ 8BK o>< Lnc4F^gQK-&l0;zr@&`$|wݮ ({R&,]v2*QVW8w"#x"`0n*z/NVzox$TDZ gHe%hgʂYo9_7g0U։ce9ˊlb߭?|(b㤁4`AI;tz=htLc\Pb,F/6)7zbe\~Pbq;)@pzy)i1owfe3$rzfiލGPǺŠøk dyeBM{b*&~KA{ d`*a:%-l@`'+5*ޣHP:J T@B%*P J T@B%*P J T@B%*P J T@B%*P J T@B%*P J T@B%7&H(=#%P1XQs-?%XK9u%P㉋R =|@,*P J T@B%*P J T@B%*P J T@B%*P J T@B%*P J T@B%*P J T@߮zNJ E̵l@a=y%X *ޣB%*P J T@B%*P J T@B%*P J T@B%*P J T@B%*P J T@B%*P J T_%sRꌔ@`gk8y%gK%&TqT@B%*P J T@B%*P J T@B%*P J T@B%*P J T@B%*P J T@B%*ޏzvL هwNVRz{}\n|Rf L&g@4p |K`bg#\k:yXi$ ރpAQD5g z$\Xkɟ&W=+Ř `.;n0XW(Oە& }roSCM1"/%$|7 ~凿fEs+x?ȊҊ%_2v$f5]?z5[uEIs7~{D&0wxa8hO(wk2$*8 ?ZT5}28W@䫏Z}9TR}pU٫+(#8өJ٢#֭VuNοLBp-(ˁӒvNXw2'Q6QV\@l奷Uf%"< #}#rղe!KL*ι.檈\ ጸv(9]6:G5r٪TU|(MZ8x[GFq.U*FHh2i/ӟK.(ЍGx837 :O_jlJ;R/9KK|>v7[~ꌫ\.kKQ59gzIdf)!lPSxUN\v!QbI(+;kG*֫l)Khqe}de`7M^.3?͛ ss},N_V?z7}蟋&OyfϺ)?WraMtBڨVe.jGIe֏p] L󋸻i{77p;z+ Ms?Z'G f}m&VN.u.2 JWjlgڲo|ãCsuk[fղWaZth`i@-/6I0jdžaF&\2 E[eD%2WJYKz;ƾ$ȣ'=5逼'8޲ͺ p*,b:墎T S*Fց:)ʖL(4uԩ^qg;{ĝsw.u(|p'D胨Qda>>(UCݯ˥U*%M%K,8xH@+J$Ǖϔ,H٥D&*yw8DZқ9Jr:#%TZbI {-E–b(IPJXU,:Wf@eU1qZc1f()2*q'S|iʒIu{#RNazR9ҐQ Y 4;UmwkIhgܯ m`(S8pg 
wZ!e*[6\Rgh2_jgv*m(M`Z4o*M L`NE?`4|]Mߴͯd7Y]tzYKcvCHBo~!?b CvUcPKHyNg>7(бrں˾u|ե7bMW9϶페3Ċy#*LZ|4bއ$33>V\9# L:D( q e>)s ϵ'!2Ly< qRfGL] <$ L%Z3p`j͜@ՖFe[;5l: v vRF nMmo1ͫ&{nbg ̵Kb-X8fz,aΈopy kNH4/eU壣Iے`l)a]Tm 33U:E*J DjL$0!1?8[ q}l7s\NR*RNK&AB4.47^&9I :h rҮŤUt/L#00k$Lo||W $߾Sfy=g,,'2ήajf -ǙƈI*r`&ҡ*Fd_z3߲YVGm ~ap`8`qЧsP55Ʃ*n5H>_p•[4'FڬՐd4c6Q-BY-JvHM:g tyzYlາ]ku%#D/Mu*E_0*Uu1r;u*Ʈ6~۪[@|;m]?UȃTQaQg]eYkڼdvPխ+QER  :#]uUssN ]ݳ?T:_.Ϊ>*y}%oܹ?̦=-1}q[o=Pp3=ܼr6jGS/o=Vرw{^sq^lj-ޡlapa'7.\m}J>|{_#W@+I2k+NJ1#q})'/;O{©tLR%wԳx.d2dV\2kzqb92y̻Pٕ3v6ox\va|[\5lDן'4 Fϵ011lI0H:ϕܔѥP~,Z*7dku g%>QM`&'GfLp40r 4}sxY8.=wY];qs)*+Y }{8WCVR69Su;v 4L8@յ}#cw&ZIl6UPiY T-_[ vRSOb/-dK2'$=!2S9S@G!VdC]:˂^dA524"ƹeO-oG59d6 [Zj3/?2 tRRb 'f*Tں LDɴN9b6;ojО^fA OtLTF':t5st?s,5a= w}#$1˶YNmTXUigl%TR1lGS}f$cFUP< /hY~[bW'a9M}csώ %ot,~ ɥl-Slg2P`xp+Dp}R𠢷G?Iμ;xzy2u aGoh¼V2B/cdyJ=KCte~A\%w!Tfhc ʳ7:Bԓ{YtONh!ḻya,}Y*YfÕ4.db^(d)S$֩Baw~ʛ9s.~㰣YeI5@RDp1d.$;/]6'#9sg1K5\O Vi-9 蠳6T82X񁉖|Leh/lq_;Agٻ߶erJgEHp۞io?Y[LRR7~gIJ~I-ӱ2@lY\39 D9Cz'PA3:ނrArh*/"Ł{W&H)WR .j%1;;Qr>J3 &h H .mrmgr#vӎ"#OL!^Q=J6L;涹0_)?W< x:8o?Zh[Mk+{N`^L7i I]xea..~o=jX^i|. Myuq"޴)]Jbݣ~:Ԡ12Qv;Oy8pu g~|~P`lQ@Jz&XDXODȍ8!p$EP37n"o>ʰգ \+Nq *po+Ե.ٽ6n+/ _.G5Ġ(1gQƣvnKdż<j5c^B6Ƒ\?٦aa8efy4s!uɉ&|9ŘSoG]>Q7>rX%7Y]/jįTdOsz@IycM[:UrFSd㯏zu__˯_Sf}t嗸g`!D%ڕF|7{ m[͍Ђ݄e\#oHu1ncЎjmm7?W: 4Ejv=fŪV(+]X-H6YVvT뢡TY&/B8 !vSzKsvƸFMM泇<$9LiiGM! Jc< ZFA028w% 4 vv8,!l}t:|z4G(f:hV F O~F@x5&Ù>I=S|1} n7k w4\ޣj^{S ݦkx֨UCӄ1"] =D%a&DW6E"8% O#GQ2i NDy:H4Ty R3^*K40a Y{Th"14X݀$(σ˛|1EmcDQ31hL95E2}荜 V&1124yN`=dq%€q 2֯>D=췯O;"!)IqA⪗pH͕gRfPPQ׏hMZN't Lq6C {"ǜҌ_,aRȝT8ţp#PF;yGL׃o 62M/.2'r)&2>9~4 ;gHv˷,s} -z8Z4ᣱ Z0Qbh獿wFM\V/"aW:L`b˝͇={Ǯ7rcw+dю]Ǻ{`)JO'Cvꄏ.xiHmI8C)gLk&Ț&Erge MTUPAmY>LJ@dĒSԀ($|,F &&,]P$Q h~Tf_}z/vɎv ;fw @݆mzkos @wyc]m)[zJEܼ$%%&h•X9⥌V0imaYNp4b;i$"2:Ř'F:\LL2&cE(HhQLp Aod썜؟b! 
9QpEQ~"k*B>:tO~۠txT?svRgڡͩ(ʙTri Hca9IMd O!{.2½geJˢvXԓ,R|ؽL}A޸cWf=6P{`oxh@͋pc@ڨFhlPԧdd.hAIZȨRVtk"0$y9{q>&I@s=aolƨ_V/슈gD"xk`!yJFYp ')j"!J)8IiʣhU"@Q6$;M>9w8p =b]vµO޸dW\=pq[=ZZPQ )aw 8Ԗk]}AL&lj Cΰm|)%ES"$o2.}sg"g TKK=5FJj&x]U^:ǂ6,ؼ48G-$KCd{t>><(={wYΙ|Rsпk"Q;R5PA2-ƔKaTBS9)J0eyI@Ꞷ\*g8P ڒƹt}Woo| r:q,JXoZoG0s\gP76.k~*~Ξ~-M% e*nU2w:G!X*:SMQ!r0 uJ Ѐjq9JD&0`(\r 9JKs$TUQh"9͚0RTc4g3%CC`T uoWYolG? z{6GSQE5W`,)«e7-U<ջrY|6ɣdE^ks5{4MrU%iіq;reFLf8@FsCG\[@=?ý.)m93 >&1S:\`$;&2"l\`0KT~oj;G[vgp[sxŋ⃢j ZRhnJ-QNKV2娎1CGMa;_gQϙ,yLZV-/lޅ$%dFʲz&M\MKP6M\}d27R݂vD5`-gs9H^,!YGw[(KͪzKW<ޓ.`:` 5|dS&k0]um݄15u&M;e{޼ӯ чd fnrѦ"n-U$K"l'׈cF?wB^俧jF>dgFj6 ؖ9=d&]xn֕?OgZ}C諯MCWZOhڮ`Xj~Ї֏ٺ!]wn/~}5{c|;:7> ˎz8t=F?@.p?&v)F;Ch1@FG$#"7,A:{PaZНk[|}9t=孷Ws?Td>2&^~E4faۥRǷGSwF-BH#yJd]Vi'cϷ Cz-p z=C%8$Lv `HƢId:c˷ !*N~K?BLg2;hHh\c$C߻ 4q/7-#gG<[WN1?訒Np6ztnh) d#qij]qceHh`vv48)˧oEgeq?Zۆũa薟ږ'>ء,5:ZտI>-{gFRj z0fb[ &>WBoZꉕ}q&8"'ڳGUn!G\]o|il0N=zŒ!V%I}8uv4->vXGDH. 7t1űEJ;v8=,$[D=>z+5ф6q>kJ*-wq"-q͍\X*Pf,} (8eHH]BBM6)-Ѫ KMrjnO>y@WShhk&7Mk-=ro{l˷/ú>h2ǁ<.%G۵O@7$24wc?LdUj?0v6k=JV>~,Oz59]?B7J-+}9^^' :$:]}7k{߿?˫_/cKgy:e+W5.q<7zq<@Cm]ZI[uԴӥT3T!$_8LD/x}XGڔ `MtjWcCbD5${Fq輛4} Dkqх}mz8n G qfa+3{ ѥ?؏@7 :Ĺ|6 lE J<[_s mbn1~`=`ȅ#_:3zw㱇_k%-ݍoso%Rv^uf#?}ۋ7//>஡]Z?yf#3|f'jbEtŽq;o60ۻ)EA8z5iJ(TtnM8Lzn^]eF72i͢LJ*+rfw]m^[uO[Oqou<]10٨FWEWLk +LXu@]9 "]9ƒ] #-ZJ(Z<%R+v'ܠ&KוPz3D]!*+Lѕբ+ GWB%ժ*@4v[5\P] -?.To+g|o7/֥ͺW7]v`t%\EmlAo. 
g-ty0G(rہ7e].pue/O^ow_mNmWD> $/9uD lj04ۓC:d;PۓMX>|ywY|Hߺh+v{ջyԷAs}|+kwb7?8Xzw꾛zY?Ci]-aj!jwn ֔a})5\WL}]u]Kt% Q6+Pu@]dD,] .-b{+t5Z 4]1Up]Т+Tu] pԣ+ƍF֧u%骺z]I!#u0ހ^~ǥ?[)ղBmYErĄf+^XX'Wz9 ^MFM3# fߜ3׫;Z,~fD(CYb0"]1przL m(](Su(DIW<{پ<\T] mtJ(KKItEDijt%Q6u%5Mpj>]WYsřW]Q]Ajߦw&kJ1ѕҢ+Mt]15j&AҤ+FGW F6u%U]-PW`-HW 6ѕz5ѕІ+% ->DbJוEWBK/W՛%*XJ>(jt%JhjJZO ;UWӬmm _WӔX")9O95qڭe&3 7O|pmo[M<{<ܹ dƹ QnmmSa tɀ] -Z_횲j"GH=3 GWḂJ()~:O.̿ %yaysefQ-Z'f mz>$Eν B麒땡JF(;JpmԢ+t] %j>E t I7ӒJ(ZdXEgJp)hӢu%+LQ<ѕ࢚vt] %au%jjd!]y&`;Qq-~?'os%:w8 w-`mO' cDľi]8CPe 5^QVF ʱV~cyK'~ҩ5F#-8n468E!<?Ec^hɕ3 #`48p3@*zsOccuxFW j+H2T]-QWD^S.'FWKjtESV]}: ;6}85uNåoshq3%Q2tmzP8>vKʆ+ҲeDW."]1p=&JhJוP6vUu$#:Eb] .] m,^WBYZI䪫'd銁-E5bB(]WBYuH]aOQxb]1n^Z&}Zl~lmY9ܙD#If7ZфݬU*䝴 ʫMakNvk&et|ˣ(e`tz^}JG zMAPAw!(3a+fHQh;p] eR\V] nP_"1+ܪ&Qu]ADM''{Wpi+f~:;6}<5_Wy0wY<ܹohL; Qt3tmzgQxp iPJ+ak8^7-5 ?gggo'>~lˏHwi^Lm- C@;bA'ߵ}C}fW3ww@?JG뫫W//ggD3< KdM33_[~v?%p]0hzo1؉7>M7z57PM(^ħ8+N7y-u%m \u$BkYEb8ys/ͣ t] e*XANFOt%NvM(6ZZ[Y%;4ĶHMeEڲ+eɃݚ/lŲ&a\|LXhjgfqjZKc  'ǿ"] pԓ$IMZ%:Srk٪芜K) \I] +>D(|:J;6}:{ͧq]sό3#yq&]QRa3#)CWjߦ. 
t%jt%s/ ΣMt] %ykJѕ&EWBKt]1eqcWUWOV禱'CS\kt s>gcg{8W!@pwHr{v/ƹ6j0Er9wCL(?(sqms8`h>X].vkFnsrsˏ79X_y=N7ֻEZO܃W# 5Hxԯ|/ީIpS^&j?9R襋\Xc.J 2ft Q6j+332x5$T\)kJx~2\d)E9)mV)1(ųs0DϘ ,l*(0VrEGFo"e\z`C6 ]7ĭ?#1X [IM:m\X[l$p~/r5NtnNo4 2q^vӅAhN`_)oJ)!_@|tK}d;2) _MsII !`>d-hU mb1/E RGL4wUXipg'Z8cӎgc\3YQ ƄS *Ęe"8Ѫ(mz,3N)e"+Ck=&FSehM(xٲ΁z՟@VeKnTt dabVj!x8(ǭ'2g!rVX:1Odlu*Ц ?_?Idّ`\gVi .(!Is*H%C4< ftP5gO.z Kny΢ &ܳ= f?3"dҌ._d-StD%$|_'U9Wpb8j'_i6mvcӳVO&" 7zyڬHbc#y6'/!~|4-"=沋KgmtŒ6CE!iTe!S_ t},ך\9- d+1'%@SȫJ*̀:hO=`hGkr_-kL'a0NTMjd'*W˱W>8Mf4Z._5%HF/5,N#`HcOeaŹp:5\mqofȾWʊD "t:MU?*Y*{IIg?pVBaI|mC3<գZlIgR8p@=lz:m4~әzm_<#'A<%U-Rk*0xڨQ5 @3qI#?#rCU 4s.54G/E> #!{eC+ lr`BM1I + MHnϭTA$\IG8* ܅e^*a2PI.1̷ R[cg?HfS)v۱i*\n'E*jTm4:Fsr}F~o9c:ۤ,gpSVBķ8YPwt=%'XN2~ˈ.3TTl:/&,PTJTyU zodB/ӄӓ؝[=!{ypZ ]<"{efu`vڛ)m)ۅ,(MwվCM~_m 31)r<2Ub>F E3mBlFڅ[A .6PoSܨaj z^ ҃z8ͨ]/'l^;Gޜw bl~}/F;LxCζ@+Vc tvuGjy`rOjS'r68#BXa2JV]WHS~ 4]׃sA ammWHnImFuTA'T50H`n 4$9"!"^39 L*-c&*j ?t Qw)-j3šgYqYrYEH= řN:z*t$=P°L®f*nW)\:U+H- &2 ɌRÑ>mKV"++,TXD<${$NGv}N93c RȝV' r=)8 Ҙ̄2 zpRk.'eCA>+nF)Čw,Bd-{p߃iȃ\|٥[;^$Sqnq kzߧ?sx"m$yjO9"~{|>6و`3FdzF6pzPWAHG2@%ՁJ1Gb.1 Hoّ.ؤ(>IC,c\!R2 ExPd6v1 1{`qG L&! s&Jjє%v˕p41%eJ<+'<'c(Τ0Ƞ8ĶCes@CH-s{\|<=`C!Zc;d戮VҩnN gnx368R4+\U䚘@;ړ/<'u$ g;ӎOg3=;MOgy!|oY)G@ JA6. FԆ+p3򘸲by \8,r>}a8Nr|Z1:(OI2.StF2)8hE>rN% V%DN3h]e-v P>a? 
3^jÈ^6^Ǐ8{f+dN6a:)&8j4G4 >vff>Z\)aK.)9 ei,Ak6 OiBݜ1G{?Jǎ#mq^o~i^sY~~7u> ξ4,6xo9˔D.!R~fN\g9_ɛIG6d>v>*oIw*[*yz<ˍ;^t.wHs0JԢIՈ4d={UREw$sYoq$Ȱv%#NBI_R5&'us~z lVҭ`D6˶L:-Kl[6ۑV_o[S},={ jQaٴRNzyEb"/%/٢ћtm41rTjͨ8ݟzqa<F4z 9?u+Ψ_ǍA%R4Re*ʗD_e![%}bvX+sB`*_WJXբMR]i;?UK.m6VNU0j\_c e+U$:l0 /Y4d/]͔[8Z=ͷSfzm+LKU/>0KdJ)Uq/hjxw!FmaʻX CW7$+<[rlHYv l8慉6"]徑[MygѨ[ڿ%xI^y 3ȒH8H-@SRS`,G> ZX q7VK-jaxr5 !֠!:k CuT[{B߿Um@_۱Tv7ӵD;qµFTZ}r"e6*YQVh Mz҉jeBw!Uu:LDںۜ[ "8aKDe:&Ffi:q t\\[Q<.߹9Ό "d"KB3F'cB;@Sȹ\rLN/;I]mo9+};K\vf/Y,A@β$_nI~[d:'VUŧzR:{q  )X :p䤨I0im}"Hmouw^ ndCyS]-w vobo}|L4^'o"f.#~RoNܜVttFt roB]rޫK ~ha"q̀ )&$Ι0QK =#7j+ax Z `;t?O t f99~,v1P﹠eƿ vU bUJv"SkU!c0 l%ѧB$wԬBةk|( N;BSiMv>v{;A+vOv_i;0%D uAFLZM*k,R=lYJ1JӖ^9L|gfV">I@!#gS~qWqgu6Shƿ}I AoxRnkm{D'OpAǙR ƣP,J&OFF.L֤dqVZ}:_@ L 9\#pMpG%(#4Ĝ *C02Wƙه,˯`~qnêۇ/K@<-̶PϿPT]zܠ%3BiS?%zq5oyj%cI(@X5x>Lc-7S>L^fA&K2=0)TF0-idI]Nf ^T1Rѓh]6sXv08Y6^SDX!1PgT@tk, A\x-:o9Um.v[ ~MaU 2YKaҤSvj6}3UV g8ooԦV 4~LӤB!D65^1DACp9BKqOw|clJ`&Nfa^Lƃd A>e_+/{k >Pe;0 jë"u_-$..h(Aܬ=sv^[@uRdFË a=] ;*9jwW 5|UɟXCVQ|p9n Cz7aBrR9wf;oTqY{^o|; UgH\|CrG*a*Pn+rJGsi>`3lo_v}p囓k,|0L\oyt7ByrqWw\oz2[wJ5r9)Prs\GY42dV2_+`FBqL&L&Î;GWDY]`*x' 1AF.$QDExHgq|0ɤ9CʛߦNZ n+t@+!(KT Ts!C!;H,Cڝ.06&>fھXgi kPX% g*lWAeNoyٱHT],*{OQjL5/7ֽ$+"jW/I˳TT* "SUXa۹9WQOlĠ|欃TRB%Pd]tiM282pK;R]C:ZoH^:m@Fр`cB3pU,Vh?N#L kUgN<=7fīj\Rl%psz3N<"C:/'c]fuaay͹d/SO2)mp my}f/:ᎂaS%;9L'a6Ë8> ^7caiVN>r ƀ̛mbzqez%.\mnt|hE|͇$ P9eh)ѓZOξGM釥"( 7'ihޚ=)X⌦~R(apgKz~mTj~mpF2e"|hxz=[o y*YTm#)~MÈxq pra8ӌ0Lm!Oq|/j,%h5L7A2$JY,ٻݘn %+ >gqk ux;u"a'=li/JQ5"!H8OLEpyd<,#Z%)z9T΀ב[ Btqc/v]вZr:8/&4A;D&N&3'oCn }~wxCTd_W]D7T._&mwt`yAuɤFin27qJnE{h;m>S`h st|};$:kFq{A&k2QLƛP\&k2($E{GDh{ge-[l#Ôٹ+|LrC# 0Gyt3olpiy$Kc_N#ǬQ$Rԅ'P(V=0HF䪆Y/ʦdnOK'5/ K HFb,b ZReC&nCӒnKqz>Ɂ vSyEȽlT%EgRC`QkJh 2:;,N3Y3g3@9hp%z0h GH`xiε TmXMݞV iơI}RUAo֋ܓGjv=K~wBF۳c9(lӠ3ϓ@&\K0iJvIF#eu[Yc(^dB6ɨhJ86Slǜ'Qy}߮Ǯ&n'cb׮zmQkkv!/&YB wtk;e5u2szA !\ 7kc)t+ rEQthd G"E{ (FjpTg?lR)?x,~ee({{ĭ&JEQ 1:1 &^f TY]J"8cFgc($?!&РVKbH 3ޗPM=%!H~q$ >\g5-9/~Q~[3Z{nGiLPR 82-r;'`VT_2G٥}{x_<|g-8_kv=E;Grொ|[\_LՏNr]_Ӹm~me^}cckR9w/5P 
>kghTDRc9}!B*q܅8"=qT)QG?{֍ /@qs2'dwkԼĥ¥aLJٚË(("mx9FdȚPKr`AiQJv|ژ#^Cmekȸ&m TjwS]>r:H-sJlqZU(@"s@2L }ֈŢL-eGOy/͆l#Q(52$؈;n#^z]bs 0j+>eo; X~YGMt&(o'9ˬV]z ,l4lbALUCdMSkW o䑩~v OI>UݹuS [H3惜+0hS{KQoE*ǝzrNŢg[ۅgbThUfZ"$) TDU29\B#Ɂk㓩mH l wC{0OCa H#\LYw㶺_o9>.la"$fЪɼhH;}Wb ZLLrz}9 oثZgM>9ZT֛?y/X`w^鷿VcS6Q"7jls4y\yaPS@)j5*\PmPH])Rڼ]%_yW+Gr,V` BBKO__}2ɼ\wd#}! O4x:,Vp0Y\ݡ66f)_amc*y@pekd<j<:pRzip•Bzs@pWU\;\U)sW%+P`ઊN;\R[>z¶ 잺/i^*)'32WW>zzSbUרC*{b)G 4ՕFc6-~^IwƓf+,_i8I.H @4Ct&MŐlmo :=f*w(7 Y+AwZOp7]FXiw6wr3)Qg[^g I[2E#H2$+SdDp&/C@ q}gۛay4:_N yYY/pbyٯ3\Da4OE"m^7K2b6|P|#'EZ'^u"f uJd5j7k:,`*<J vߝ*1Vu:Nh:?8TQ%ُg3 W&0'5Ird"l>gG垷{I=:*ʑQR Nm87S% wY\4őx_{i QAE}Lf01JpHġ'dSho%%{.iڙ8[H]jNgRW*I?5ίl8xJ~zQx"5*\bEgym$9P;"ڕ)Tj)ACq:QdS҂ ¥w/< b@7m-oQ&'ZX8b:wi7[1r,`]5i/= ŊFZ/ofʧ6֭ʘ$ !zxrtL97y 2"xO5>uϵ4mcH&D-KAB^I4غ5QƜTRVdg 3YhH47 WNk(`@h2!)um=;Rt!@-zfOf ژuk1l|zJ޵{7xa} V5°)R?,UjnW$j_.mYWp9D!6dƑPl0yIì*6LQQ.=֟rsF}MnuZW3U[ha#u`ѸJ0~^-3\hVՀ\԰N6X8{YoN_7ߟt7髿:}g`JM"`ծ"a?CC+[ ]'|quaOGq#˵5 ~0ПZr?-S+\̊p=̽ f~,hM<諧r !}Mڝ1nH׽{$>C8&oW{q|cbed@g!N':^Mήtbu{]4uo!;{_~2䍛"׸UǺԅ yoJřNSw&MotēErV94M&L!( 10xߟ$d*xxV" )0sakIE";ibMsg(T B!stKY1g"XO}H YI!$esEzV N>N DGd<#DLT)M\u>x]l:]v.8ӱt&} VҞb8t9GHd HP +N @G.N:1eL1 fc}6D#H"Bָy+<'K lQMV c0י[ ]q-c`isJJ ݽ>XѴX۞wS /Qu 4>nġ~rMqpj6e=&wn3xMڥQ]fi&!(f2 yI,#ER>\-'LL6QZjBt69|:/`Ip}O+%ib[{aoʟK埼"dj^y[\7=ٳma8s㨲^{~=ԗs/k0wsʑ[׊6{nܬ5 x.GO<-%4= yʇAfVa >;bV$"-6(onR%kJ>zYǫ|W}Uvxwe7SYusKQPJN\̐S2_ *%O]D:Y@dt'\t0s 1zHxmd2|=xgl(>W=p1;~H蛙PHQszQO`D]gtEQO`DQ@+LP POIP|"]@Ѳ)ԲĂysVԱv&݆Jjfn)Ȕݨ:ܗxgzc\u~p:lOlbh6Usg[/9+Սd/ww KA^QQ"j=YJ 2BuْsQ8xp( =\r#@l,lQAJ0Hus72vtiƮX(B%XXz׋4{FE݉I&'& ? G젍*"Ӑ3/PΙUR+(i_}mSy* TT&S۹4` s;J b-$wh5u2^Jz!lSf@㴶eDfJdYƚL" p,RhD,j!ts7*0 "v"xdB*Q0 ;9h(U(. 
[binary data: gzip-compressed contents of kubelet.log.gz from the zuul-output archive — not recoverable as text]
?@uKAݏAr N 3C\ HNPjwJ ^w843fh sX\ &6`#cJ4a]]8_c1_'wg?T^ 6l͆0 $tx `,0eXi eiاrޞ)*D -h[h nˆgqɉ(jxLpf%9yC]bO x ֥WbvxE&_p|Mn^Nd[< Q <gWWeul 1"x8 c(bDMR*Nsw7pxLgnvkg0>76i^㹷7ʕ&lzvVV_6y2CvhK!R}i,5yɔ F呜V]SnM''rN^f:EsK'>z2Q^܋ڗ:jRVh$SʻU'r!뺲i)-Ex ,Ξd"x'YIXƄB8FN[R(s]ӆ@+ʙם5rm:v>Lϐ,N,U#Ď]Y1* -孙>m`浮r^rKT୐D'22BTrRXfLI p#LIjӇo^vxwL,05k\L.3||Ùzx>GIg)iJǕa(hC$k!`=7mg CjGðP;F|W֓;w<=QJ).7J)9&\JjVrUa|q0GS219j{OS۹}[eQ,9}VBK 4ŝۆh0ڟ)?'Irs2g }Gd`ނ FHBo29Z6vM'  َ$GXSl1=ek [i%0J}օSKgq{7^ n5]&SASdX>x.SsS &>dgZ5CypOh~<7~fA&\)[& s/ֱ02D0FFaK(6xSm7xj>V#ej4LUIحݔ)syOt5ͫy5@*/ToFyc$2yPA _f21C4MNhTO xc g?%|BrcAlkp7߫-|` \Hywz,k\NWi06fő zְ zefߔT z1DSHt!6 Gq16R_M崻!]ĭVj/EzP=Ā"p{]Ѩ,M6~<z7l']9Y^@!r2Sesu}q&kz [nnGzNDAI"F PgMϵ0N|r7:_pq7Ch2k$vt3=A|p&gvzv9qg˦ҶnVٲRekb:sz?tHp- ̺jUȩtL(e5\@K*f<(A++*A#"d9E-{ B!Bw =VxØtg6ԗ0`Q:ʃ@pN`KU`^TyHR(> Ot!TۊQw)c|{1 Q#.F"eF1 3&JJP4,F+0nSp%ф Ȭ$HzE5ӪHKR]gNToQ%ﺗ!3;.s*Z]Ȋr*ZS2dIύ%W dEc'f~D!XBx Wlj|oMA] $'dp6%!V3)ɨVCd,=k(y䥂07B9~spW+;@yo 0_ fHoA*SS> m5\V9yu^d҅G _Zo],#RCV@[# 3m/Ȓ&wE`|ZԴqĐ_߫-=)4إi~9wM- ;?kc"Uw@|R)@t}F&"5|+aI ln PM>ltmdTs Zѽ: {#ceMA{P2]QF4 pZfhcE! Mr"%c$hoL6L1 `NĨ9%3a@LzrF]d8-dc˒ƍ\* W K($+. 
%S՟c|ogV93iquzįwF(7^ Cw}_n,/ZOD tG\e#|#>Zőz\og$zRvZc1̭wC2ӮE8J?M9TkNmvL~ 1#^j;zLΣ4fm0{bSc҆mci2t^Vc T"* gbC dp:fܚA&%{Ol|Mu_B^ e5m'_gP;b}v,c+m !oG8EQLk~"2Y{hc+<3JX KdžasB4u km qAtK TFvХ1kaEtɊ3?3PT"hfW[QD4 U(K-{{JN P)*LR,emc9M0T0O,.o_sq3J=S'ҳyˇ)ѲCg}R #"=p?vp!uS636?^Ka/h,SJDnw~0_"811$sM2$sMZf1 6J^kQj*Z cLia&R0&qBFHe )l!]u%Cl'/Zg-`D]O|/T`SxVviӡLę"xz  i#Y0l@115Tj ę·a26r,Ƅ08`a.8"@DRkjkEKayTn#YHXɺzIZ20=f\Ѻ8zL Axđ@rXkѡ Ty)Qit) +Z Dӌ[E9@0d X T`Āg5\үg]3VX{sDt$zXڞGTJPňT>ze7M%$t쳪UO.ď4eMu]K|#`d`k)COv>)cq]G"O6oC ntzW ̏Cm2)XY5IPBw\!>E)a=WG !=ΪȞ]ɹdPh*$Uh z` (UY6lSعt S`6E]qLM fG(S<}$KK!˛k)ҼS`zP#!XEVU> { Wz{g}&T 81yVU42G@6`(c^h%ԤYv8o*c{8KF;8RIaGWe":,R 1MliůU~ ==>a1%ΕDΟ򨈥3k6 BpîhD4 q%_=#k u^ A8+B.J͜[!)OP6sa_Q5Ӹ-<2iAP*uX92fP  dUL]-F 0^7w*sgyuϏUbʹ42XPǩ 7Ze6u2(=#Bc8S8W/Z ŤăqGI|TW7pC:]L7.Imbl u5*/ZW%)Z}s=ꂎ#'@n"]q ьS R*jngz _ 1$1u¦5 aEQ'F״!RɑyX<>b-UL`.e/_!_6}J$b_۾!%}cU;qhT8wt3f9ɯlBل bN&_ń_ل n&ԴNsݔz6u^ZXC|- SE5_MB/0 6b0%W[,Qn+Jԙ"6}|t~hehﴽSJQ-Ya6 3Mlz|daqpo`}# nTf0ņ .g)Dmay`f?NkP~\٪_[?Vяnߌ~{oGfѪ&,Ηpk(mCKQ0:VqA;vGh/}V~4nfEs{o!~^]WꇾGUSOZNQJq 숒9ޱ[@U󼛍N0 cvu,`l{ Nxtd8a|Z3߰pH{f-iպ>) :,OMܛwpeѼnLkݼoLn{M̹j|A-\E)k3 w f ,"quf4kzd,2WZ)d7# = I3Y(ߵN_N7u{ mXҖ\y[~q;aj[ٮ`q|μ/ߖ/Iϖ4)#vŝGQG޻FGJNSZn.3NDb،}gEb%\6Dž9yH,]#9J䅣wFco4boGZ gןuv{`ZTXZhRq))F](q+]7ކo5:Vՙ[+\`Ѷϰ[{F3ӥ0NMDzaVRܜ,us5nzQ!:Usd)h;)h !?v,j,䄵R*'d'*ِcE)+e0SVZghc?C*`;w W΃KxUUx]%;=z9R9a''k}Xaa(MڅYBg‡ ?Cr.Es` MkCz-YV6+ gpS%+gJBJVL- CKdPԗ*id@wZy]"^"'fh33+ շ:Ak^ ci0pќU vr5i"XB8[w ׮q*]!):W4ͥ*DeⲉGdIA,*sNr C6%lȆupH89,DCp( yrϩ*CU ]"s$\$ 8v)CfE`rnCz TÛ;ܷCIIz)TRd,s^/IA<2Ri&*rh5kQvwUȚ ʗUwX x^?+` |y٥R&:zm6+2 <9'YlK"c矃4 a[$,fݟ798O]*iQVxPӪe* =4􃟾s`Ȯ‚FʦCDIPN0)nT9t2O3O鏣8 0>4_>{&ss.pl 6>޿)z^JMzFd5,]nL)l3-PZpT&K)y1J= a~|d?Q(rc0ˠ>JCdLIT Mtywl>M/Q- ٖ!cLK#UiD. 
ϵL[`)l,F z9TW B# E„Ӏ &@2a 0Ah% 51 / PBZ=cbOz#j$@(c+lfbhidS0$@]Oh⮅<*u5LapApîhD4'_|kS'‡0],(z%cxdz_={QoCjc={e~][ζM%""&HP&ʀR3'9S"B3T h UT~\LY~4 O9qoF?q>ZܯBiFf4 Gpt՚EaS;5M./$~MM<2iAqL?{WƑJ^quGc[< :E@CN߬H60RH;˪+ZLjog*c~35,&Ԏ1;S,| gיRD }+I*DĆ6 ;0ˑ tN,GYi/;;tesy]򉗫%VSH Gm8/CkJÐ[.ٰ+jyiTr/#DV=C.9 ~Θ @8j9w;{5Tus8f);7ϽXQ[ukR.h̚D]Z1ow"b"!DA`qDXɭ?g @-$ AL{6njǶ^w]Lm^,9?ڼXvZuC:ڼXJzyThbŝC ͋U >̼c!hbEŊѣ͋?޼XqyyǛ+yy6/VC6Cŵ!3/<"="U@5<#C *h(1xV=y1bn8+P|Dy9x6brP{̯ոwbLXX4vuQ#ܱvxFUrf7 ńBeS-fj;KjVlDuvj֐jvP:Uft^^pŠO3f:x~ύߞVL-uZoӪ)iCrTSZ?ll#ONhuVϡVK.AlO5 ҹͩ#u36o:>&ĴYuZ_PQVKɳGC^v9Jxy{U:b*vX$Rm(*EǸ5 - `EZB\2 MuQ#u 2|ʐ+܁ R^'`%؍O_j*_>{uZ)&EHrm]ٞjˆq kX$9bZ %@k͏M1[0KX g6u+Hk>Ow).>v\M2Ǧ]T;Ɗ4ts? q8_t''1mT 6/{|v?.߆EM!GY:OnȻZݡwAcC>Dμ[~GCֆFd?V6nd:ntv`̌`c=ܪ%Bǎzǘ[Iǎ"ڟ c̭ZE0fF)=c̭ZU0fF1v-AĘ!cG13X s1j TØ3ǘ{UKP̱=cmZ'Xc> Kz@;ܽcG1 Lây"=ܮ%pcc16-Ab$c{]K?1t1sȶOcV{]KƬ=cmZ&Ƭ1cn4UO W?pid Pdx%uʖLRjc-h(ӏWBD{%B$H l@5{NQ4 9a%+`:U0Lއq,M7О>Ot=`kNTȦZ׃)_QP4$((&9W   VD?> R $,\*d` Q)p,b0 (f 2FH 5҇BfPH%' j5kk1rD*/Jc5`1a6`]! Eycm_LޕBȥږkx(3c$:'0%0G5aDZj\@KA$2ddfP3Ւl,%=P•qCIrUgӫYt.T|W6=s].rfxoTW9W4e: rAyuѯC fz> I/P ͯ/E拵uXV Ϊ|zw&kD3$_'B(|8(>~=l|\J/rvzѐzq"er<1&|. 5Ւu;3"zZ?4[MP0gw% \,hCY$.Moȵ#æ7dnJ4(.wg#9bQZHyjV10oρ+Tem@Vl,NYK%+ q= A2 J5D fu:(|64`XPMޮ$ v1ʅ<%|(<'oWBAդ9a3/.8yyI"*lQ4L8lOE6x5M91ύF C?! x*U!DA0 iޤ"RDVIW\|[p2V蜪ܛ_~Ej'J$,D]x7ޮLo'㋏-~R-~J^o6W],oU`.+: AA՛_ːJٳZi 'fT44FX_2t-G%Uņ[~] m{j2xwV`em ւ#E{{^Oj JaMd [r7gEPXr%aclaHR *${'9cD|jwRԵ--w0 _{AͲx3-K0{u&=Y؎hK jZ>J|JgWԎp I BVafKib'Yr P_;cy4VR,mFt-)RG(88VR '2 a$42--ػFn%Wdy/ryl.289I6ɶcˎ,%ؒmɎfcXbX,8YF )e6wqt$z̉ͧmrV\BJ+)h+Cƈ˴)֢ɖYpi. 
.THȬ#slexnYٝ3[!cb6Ƭip*n6Գ<$#;ge;< 9ɵ % i_Zcl>WcElT>s>U<=cTX]žHo 7P rRdMKYu FE~_ßn}jq0MKDv~(=K=Fu7 R3~r7''Rd+fJ-BOShMP uNwoCXsv{_ (y8&]ᓍF9PPk xMKe8'H"2n݇(ykdKIPu/TP_U2?G^h.i)PD*~֝[||onIf|jўY(,DæQ>yER^~_ZI] \/֘\.i|t!'Z;!Tb.K DkNF?K/ux/(qL٧$)6IO'Li}M0pJdPï.qX?fd`Əy3ßQh7THe$ꕖhnRP5/"vjX"2 Iȱݜ%H(n+2BE]LX2Z˕Bf)rJTϯ *tNǡaB:x8}[輐 テre U,1 ikj6'D#$nLE)㊣(4&vs*'SDifG٪6%MږUB=Hb>)Lj&\H&Σ4:yiCEȾEkaH_A]-(bE*St]Wm8FS;yXl=ZQT9 b 1k|/>YJ$ZKx ڬ:c[b%70(7匄j@-krk}+F6}u4Lnq}bŅHD@'ڄe@A&5}l).(uo])IQ[B?P>t+voM\-PsIdB&N%=n34I텰Kpwdɪ?*.5}diߟE6s}Ol>m0lSI5Tœ{|*]LPu,6~1AʻK\lro X}7Rm'bxi-0wء.ب"3??*X_Kf)x{$h.KgƣJ7"0vjcp҃bڂp?fqב$4h7bu[Zy[K#n}VmZSdtlO O*t̸R}⡱qwGR@VA:M.>ns҄9^;鴘kjE"|jF=~)nl3PE]߽'vί&DtYfKғCx+/z>EܟF^呁TsRG 7yOvLT6M=RD>՛ ;dA }kxd,Ց8+bc}tY}mo$XW.Lo]=C.nTV~qźwOc.9t0DHONDKsX_YCĒz(ffh$RsYJ}QG Rwp3TT'#$;Qb.a5h954/h_0eKs_L0LH@eU?B57dTpJkwiܥIp&]ݥU|zFT IR0<穵6sFy)ifNp9:)O~ 2%(QqԶ7Pn_A wkɅ"*o˒K_tq9֝`b{k cR5W2l8Nf_7~qx4`;T#[HLZ&BPȌ$UuW 3tULd)ۜЗt 6͇;\{AV'՘ P1N! (ݓt lē@prۚln2dYBdS4'} g\Sq?rQ67`9qs[ۤ˝+ؚʷFw_D2`\ 99 4O6)['(O"S`zI!r(ynh$0&1YQ'gʁAʍVS'L"5) NYPI(Hj0 Egj[#K'AOu|9Aq #(JLL<R\s%{f m;j߶gqŁ\0kObp6Cc+ߞoϐA,B4#ޞ*Ehx5KF&GDH{LQn\#ȑ(bQ.󆑸I02FӄvT-?9eQGH=њDUi5YX\>gL]>)`)T[̔Ӎzw]X\%?_f-m ۡJW |/_.&n:?4i 2IL'&y-,gt^}Lj*I}\ǧGQwH/Āթ 6\ۧO]WiH_7T4jZ@'Q|H: 붘!s' LA͑'9[s'My0 {1ceh9嬰ez1Lb0 aR^ sJ\ M5;437 Lfg:s-9i9:Զg<(JBfOb3]ǡx6J̝=6^IS@KŞ6:N dLX4JRPlL8@~\Qnhm6 ^1;m,lI%5`^ kTO]mdGsDwL#mc eVzUDKIRAx8J!(J-[2Br(Rƹ"'4e B2s*+ \J'"if^uإl9R4)%D=Xn"=ѭxKaNg?<|I wMG AޢhmO=F3j6յimE^T zYuD+~NْB%fm˴D6oǣf7>'_LoeO/lE0]:xUxtpV 4!wE-wC8CQq%coCh&^q- xtKQΉ];ȍ.7F-xtLH#!ʇFhKBr5ߗofks~}DAhv qa|?%~[Kp i A}bh ʡ^q. 
hWDw\1~Fz@;`6J.]9fښu_Qyhﮚ=gkr6%\${IN2I/غZV_$ۚdv |A9,hTUJfKycu&I^5z4̈́!rf$V0f+JL174s It S?ï~Euլ\'TruFi,1csθ9 3gR){͈NrGt$փwç&`>RrבMJ7 xN\oQՖ?Bro/,Rjh).Bɣ_ z}\ΧOcrڈ/ټ pS ?-o:;ejިHƅ3}-l7bVnrlҢ9h cQkOJxjIC޹VtJDJ$lɠN> 8x<MfɗQvx< M>y @tC iauC7Գ&мTh^rsjujOk:}C;;DEPcG&;/lFu/Jÿ'ҷE Of/J%ztK)~|-߁eۘcaCJ Aetp2KQ.Y[M6Yafu3%\lˆ#k!1@=R.~&t3!!t,3}|"ڞ<<m31]Һ֗Ҭ1дuQL0 9/[[cTSJ0d$QSl <_:ySK揺`qTzgr[u\}Jpݟth3usw(HMУn~U< ťJ %g`9$?S xFӲ7!VR:#kW<{ՄdE>NiڵĕLWMώ]+LZ'nErCEyvٺ h˝Fs1ٲ3+ii&ZyiUYޛ91R(!Nz4-cY %:K!@,\g "_ ?\{tz(Eᢳ A R0)ݪjg+fwOМ`)~ݕ>)ޔVRL/|}L —0eSJHlO4\JԺ:-dK{e8I+cmR=\+C/eoZqޭ<C_CBbԹ*p$j[ʘТ*j8;TQERPC;F0 ȺvטRf~0CS/*m!E  Y +rǣۯvn1.H] g Wbx)8TYtGgEC9Z=\@9ivYNPyh: T(KtNt¹AQ `w#V)?П$'1J߁TI[H-OcVa|'TO9#g;٢8lJLg ؒajM')Di9xk"5VM33OͼHZi?DMȱrXK5CW#`հ{neșpDRȹ&Y dNYq IST ,S뤓ma Pr^.eIlk RFu֘oO,n7PGDN֝z&OMuoͧmO_xu9ʮI͚jn.{l&P` 7X{폎ΎLq|sVNZ.۾B7# t׾h KQK_t#U7 w qz\DۉԱ[4.x%^gEsMlc4+F˥ *{>}ЯA eHIqQbb2  \K4pB|{BsԬ2r~c-njMK@(@v[z w8$u{U ?a . Md@26FHdc6DI1CkUwx4zW{I!8/jWczLw&~J#NZ~=kAZ]>Ω.W*ai NR&2E3H:^-*"4,E'PVk5k1Zj9~DI`;^bSn<7UQ֞wzŮ\&2"xsʂn;nmk?λ@*9o&!NdeL~) Y}ni[oݍflr*U;)u"З3&uP畚  s0QK+^9LJ l|7Tm t χ+_3~0%ĒfZߜϲdS]\9iFOӹc=kktNsjB(v&!` ; ezsYtiBc~`O`Q>Z>;a] ټT߇{W1-NyCՎ*TϹn)~ 2*ƋhQxA^W"8b8,{ˣѫ@Ȑ#@LCJ95T+5exdB_ '̜MV&L[!Ff,3Ёm#r.#9T)YkÎܝV̐m)`R]&Li֒kln[Gq#:p4ǭFp~#tuؒkC;qލ.hy ^]1%|gv@d{t FBOU>A`uWWs-O)]͜>'-)Pdgu,ξug?b6"b;2BG݆r'9m_ Ō/Ԧ.n3C7Qb|һZ$u읟1ԘW]A0%qrekk¸@Ol 9Sʦ-${ICLh0Ҍ BXnX*sgDʌ +Os-(3g ?Э> 4FМϵMb'c0CjCO!1Ƚ4#U\!-t1ۛY5!W 麼K-z˖>bh&.OܣLS/׭*O.{] ayeߵ=Mi4p|q;=^]^RPCVF_t@A )~cps0wa?ΆYzTh) ^t@SguS8ξݏyX=>A[!<!J0u1þsҎ1A5# H"'[8PVḵ{G)ȓ )B B B B) RHI JgF,+S)Ʋٻ8r$W~Y,0Y& 1@?Әٙ ںZn`UYuIcB\dLjYa.sBL$"*! )|C;TGuV΄ 6\. 
Ғ0;R(] Q"lAqd@g![6{\/Q+}L`JٹhEMLZ$Jf%+_Lc3#!φ!<3Bzt(czQ;g,غ!]@Z,ؚP+y&a(w<" JXzh\0Fr^MD#E`f4%:]accƖMiո' f8H0TYn]VBYACH D ,@=B5; Z'ٓZ-uO)(Xlkz?ua$4lWvH#)iiW qٮ{Jng8 'l>ebُUyc6f`qNBϏъH*Xr =bוd0L3 8RJl6@}$"N /(ް+Q%$!l4&р`=ۑ'؃xZiLPU09"ikx-KJ٨\&~.X?F gȳLKk€.QLѻBԬ~T!CH c3k!KS4dUKZ dUl$BEH$x䕴 sϱa cIda!:)F_ǜdfP-7z[Ϡ#`Ez{S gHi6q$f^D-k# ,l3" gWmmGb{)]6Έ=/g q8 rX̜CZ+'ή'8UgW)WlzHUH8{sn?dKtGuG&̝R߲JN߼9Hk <^eS+A)w1;ÛV{UӘgqQG+qXW^ z=ƂxmPa.-Iuu,x"؃ ><=~ҝjlxy_w 1ACa 'fl< CҠl\{~l%\uyY+jJ8V6SMt{a2B+ϫl zD v$g@XwH?VE}MtM8ʂk2>}Nk|pO#NR""JIZFp X BB*YzmKI;#üs77w|ލ(|qnʼnWkc /8 fb(yH~׏=:Dk4:@7?B"=xye▁f5kv"*dd[2 8 F2ԻQ̕|׋*Vo/qws.ݝﳓ?_\lc2f'?}pzt>7(`qWR^?R^/}kϞ/?29+~s=!Ҭ'274-Jvuּ^Lwl4/fF_7i^{U6jъ{~j:;0Ic}x7؟Ա(OGYoMzbnmm 7 )w'u*J s$ Wu^= >K&!4eӰaw }~ GoIS7Og=+\~$^=_9; o~S)z|u+RtG.|IkQ i?Z%S$P#w? R BC.OpbӲPot3o-8 ');H'Y. aXI)K VCQ^,Pbs|StG>Z3GL\rLًZ0ILVS4Fh,I2INp Sk灱KZ?/9eqp/+^j;B)$Vǝߞ\q~Sv'o8m4=Iѿ~RXԧ7s#QIwr0#CcGq&uBĦG?V*T'ڵƫœӋGv:߸8L Z%2RmRW[m!Khth"~u^~"ma6,ny:ȥ_k/!0 -jh;DJ+z, Lc1@{ ^jZ .y4cԣ@y? :n0@8i&j}LISǜrM( L%6:iD`Uּ-}͞w3!GZtXAm< L5'U#KAڱ'IJ)ʎYIR̤`HhD3AD&tlt]r-hf'΁'AY-| *N(R $WvdIp i79!d ,YQ696JA>J)a{QzfO}MKP mZ{"`ct7Kt9d9]`a#7{k> ^G83sNQo 6]`!_jְ~\}Q= ̜i ̆FN{2Fd֓/d:nOogwҸ @V Ԟj&bB+})`<>ԑ[7J[.$O?$Ѹ݇3'!"m n()v2;v\@Jz k=#G,AQ֍AȤF`>f{/H:H S0+Z]ǶcI@ϯ&O_49+cctgkQ;'Gi@є1WrU'oj4h+sPf;#p%< i)H #S"YY+Rj4o&;"f;#k uYn1S/jOl1E۸] M79/B/oE %Y' Ny ^嶤<OYmݺϥyZ `CdǺX%`>Xk=ގZ .@wt \g_!̭bl'bH2^k3kl le;D/m-s:e~XZ=sd`L͉bq9 &ȺRR̞Yu ]vq~ARs ݯΗ`F{ڳ4t;76KrN1r&s usBn0Ug!w{7bvKF,VVnq#hSZW.@Н\z;ɿ_ ؒ.ڲ ys4 tGny#SL UՄf",tt$DHΉ фLvٸN==/1T:bC,*- 8w56 QU:`ē QMpSP o$"YQ؜Ȃ\i˞1c;ֆz] 9-؊bL3kVxȖw a*Bu c1U^L+6؜6+,IYW u>bCj׉9s_3oz,?ҷk9Z7*^ID3ZcbXxVZ MHnb,cB-LzSBX[zP'fza-oc mWO@h5Ϩ @=\z ki՝tr AHMDW>`B}1( aWP3>Mf_T2g;( Bv 1by]I?v C]i#ymȹO/U>k'V!*1[D&-pwq#a^sd$J qE0[ 'O:]AI\^/vR#Rw<"%Rr8]G4)]e=p􆓦y{ZOyYV0,5Ft|cqI-L4W83q 9Am+ň\1pޭs\1p80+rϫ&E2!ebZ ͂MIbThwY{3jhae]Ad8hv\[1) 7LG!SNUg@;l oMx`"Jd(HHmrLȼ7% Bp%J47ԣ]'YcR*uu'!/];"c]p˒RX2Ŕ$-1۰g@?aѾ"O0AnMyfaw"\rMYŹLr5}YӰ`{Y7J,~ rN1\(S4Lup d婺hK~|u6$k֝F*E rz3Jqy2aΉJ9UU; 
FOv]9\*\K%k:[(@4ˀMyCc9c1=^taF=&v@<_N0۩Dvv M8j>o8O5]fΨDkS3jg{g.13*D8ǝQQpΨ:ܴI6ևZ5|1B+7aIC{VMT PT23ImxҬ9}X%Ѫ%7k7JXZ9ߘYc-R~nsū|sDNl06gy7xÃ7 wo$6:#ywrorV n鼓,Yy Ct)ʌ(I(x&5RQNگGV7WԴ/՚$OQP݉wI E?1d ݄՞}feBtiAo qNs*S5MD΁hkvF%ˣFOv[=w'(AujaՖQӲQ> \$4ݻ!,gY<}w F8ͮ\ỷUvm_`m#H(O-ȝ*>PQ2Ḱl;E>O]NtߩY&<p܍?8R˚K- kJΒ9K6Y_I[hvYE4vb.Ͳv5^5fg+=@.>U#j/EpgIt 4J*/1w]c4Tاv`~G6%]ֽWRY턢n([OTr="[BjíOɖ+!'[hݟlV|˹Ҳ<92ds;胠U[K-k+,,ݱ\I;V)~0{*=7}a c(?C^4 _]8wlʣ5R4 kl])7~=([~7~=|ʞ0S:4z]ڙϚ#\k#9{n8or0(~.b`YrлrB;tpP>!XV}35l?5_u8 ;=2~:\GhzbFDLE㮔D[^~}!ZpQ/K>k`'QA r+_wD0P^x^tOpnc}hn[m\ыfL4KrT⊔k_)=m'@__;{77\DtBRxh#o &oHo~wL{IQAAOre Lq)EBYTLUR 3$"L҈Lg1E0c)\!*Y'@2/I5 -"yIH ){NGߠ6&k1߿]C1;щ3а}B0/e*Q Y& G!"1RxQK d OAĈzw %^Wg\ :$~T>6!`>ēJ5CfGYJ1VI;(뙽GֹLXXAIINkUzAҁ:o.Ri#ZZhOtH$KZC9+)¬#0y6$)yk2{&ڙ. ^e0,rEir^Aa&2eE5/T|,dsFWb" $)HZ( $:gRHc <$)@R| Ib/Pge7<3σ%B:̤qdRN(nI!+mL.&C&+65; {4#הiHiX"GT]  B!1EN^-y t!$ "4O'?TzWvi6[ ny7vא9ޛۃ+;?D{ 락vm\ ϛLmP6+wvX0 }^L\Ĵ|8 'L  iLk?7OMk:3<}匋}iq>-o:aj ?˯m0ik9黦Ӆ&>ݟFqFuUbcܖW niw :^p J} _&jP*ZNc*MlzpK5~|rWfM[i_+V_^cc H&]W̓$oJ9O5z}=x7^ӴC|Oq|cwn㯏K-|pNnp8H鴔/_!ɱ9wu_UlJ Sjo1+~K 7+Z({j~mC7$CH+띄KW͙4GW[ƞ`=?׻Ew; GmimY9/P~w~,]t;uv{JZtiǾ-K}~Vȕ^C[KwM}r@[gg[u}AxSw3C/}ZWFj+nnb*=tzg}sKv}!xrIW+2H4jhhv8J*9Ml_ kkᙯnqYMU /e 6>3M X7h W^FiqqɭX[?;sXo_UJ !eFYgduFCSLxN pƌ%x(b@t}>2A]yQK!W @R+X0cNX0c3Xp~XpM "i,6ƥc҂TZ[M 2{U&ZYp3Z$Q.It47"ҽgj)kBȽ؈Mȵré 9 .qdR*+d873gɜ%sYYۤ8XeXW E A|x֢PUSnAU1P5T(kblJGV5C$EJf_* tA3~Ykt^!vFˈ޾!!Rg3yN`JŠ t"zxƊ'TȦthjB-&`ts%|d2:[> Y)F&y!U2ɡtzGѝ,R* x챵N+C(S9 ;b  8 Yf|lYf|pH~t,0ZidTU,J!m9p A[Ԏq )Tr@4 P'T厕t7.^oϠ1&s)=NZpꙍ&Et 摯xFI:.1ZGN 1eHHҦ'T`!\Fec8cD.DSd`+@SA,R"A,Rb8 bΚ8pwKcz"L#x!-Zvrx*=RK3 (:='TR{j$x.Ks9AS Ñ@\,j=ȫC /4ה\"""buIwR'46QB(D͋0(4s1{s_loXi a@d^qM sGeW$0p9C;< @Ιg3k9/k^Sv|:8vjy(Ӝ^~V޿ x0fFrjSZ{Q:``LcAhJHb73bD1ZdI /D29vεrmok;ιs5e{ljdB$@rčsYg+\kg YsS%aЈHEM(C9^^꯰pyYJ5ruxkb&8J4&jX'G3#J$% qr&ʡ)䴋O&ؾ$bKNLl36LlGlK?)JP we,F=s~gAeL @J*a<( *#lCWZSy:.2~34GH6Tl-Aл޵׺m:nrk6l }wi> K)%,ւ: $y¬)1D/GmܭǼ$l ?|<%mqnhOG 0<K#KSL+4wJ * a$RIeGprX$C!RL/NQ&\5|xqHׅYgyRd09aL'9 ϜOED==㓄cJ&eڄ[O}~Ut9f-\m pQ[vsWAŭ}p1W0 
3TrU4{qX5Kp".3x܊1:&_8i8Y]d (`ua(Q pvѓ, h0QE1S 79S˱AaQ1cyI1]KKb,eb p yu41BBƟdZ t ĸ5OaZl#,S(V*9yi.QغME+ߚmU+#h!,ՀQF05Aw6ŀ`4驚I堷 N淙߾$~.巘m淙f~B{˝d8~;<k}.Q Bi O E n gi4˂{TĦgp]YQrSs) 픃'H鈍 4z @AXZ ՘T`((RU##eJ"@r0&o_/g~m淙f~;?~;O&.w2͌63h3'/j.wED>MGR`B|VXu*OgF|o9OUȟj'hSjF$Μ" TvY0\4MXHNkAuV &8v3,)VN+W!2dHDgBMԓAf%Lh3}I'Lh3̈́6LhGhG*'?{W׍d\d?YIfa10h-ˊ$@$[jݖvKBf7U%gԷӭu >Ԭ-N@[)qq5jxv{gsFZg`ƍ> @ =S/OnZy (7$4f;$.5\%1;LH}fO* 3 9 >O* p*C%Þ[ɶYNOɨ jb`cUCF)LQMAAˠYCN3m-w,r9?[wݏ0wF:m}_?_?wQ'~[Jb|ʘY1ֈ(^J|}~j$ψ&˭ۛ7~2גSw=j3 y_궪l1&tu`d vxӛE#RDS;Xz1a q^}&m;ZhQPc ]['_d!{\.s x.zg(}k-:bqN%|zumk5?pЍ[GP&[ ˗{]ha;:4G','둭d8?gwÓ K{~n;hBOԓ(=)..:{/{tbGw?C wpHY>ELuO{O{O{O{vDˡ`[!9٧e]bR5 BFx( σ)_6?u.iY[9 ɐ4W,kxo}o﷗"JDS8Xq@CD9R٦|tTov-O\qMnD6q)`Zw^`z~<o/|m̬s:lf{hࣳ'G~R 8Yweɼ mVe,]MC&I)1: ZtMhlH9ښ %'<`(tļA۟s=SA2z!XI1bbmȤ*kLFW`ld Qȩŵ35b] 9@5KhotU>%2x"n4#a]~a ڊ?y~7^S~~p,tjം8\x(V2pA"! ZJ S.m-ovj>f7Vҋxvz?[kܐx]Ζ#*}}1n"$qfs8MTL Ji[1J+Y.JJ%7AVơ[Cc sv?D1eȔ2( Ȱom(8ׁo{Nc%6( XF2)r* R..Mfz+͗RL@F;p23lAzV|V'ӡ(E]5z-IJ`v%nYօ"bJImd4|_='4D;ێjuP RD rFw&xLCc=' Yx1@jc!@wwH!5E ܍c8逼 ! 
=wYo瑨ͤ53!ym۱6lϴ̇d]ArƊMQhDTK:[^9|)^ :Ó!ymۑ6}O1w;kXr]M3]` pdSJmIաT%E#r$`L5тor#t?^qm %|F1E BlWXHqA.\TSo%gS@T'ЂìwyMs˘I9U4HQb,ֆ^|++(`}5R`o|B?Gg̡jjXFAhVW~5;5QV&w1x|i_s5yWخWdX;⑬#3utt$hGP ?1^dT^vR8,m5TΌpevJѪCӻ&TrFeblX"/nd/(k,)$b=.z"dP^vM-c^mJaYIqGi NhmV;Jog,Y?<㬇hkGb;:I׫؍8J1.uڶcm[6&EhW+0/{7z֓~d$'45<bҊI}`e?NWPV*F!WTC5@-'rw5s=Scrrg(E_JrmNy|rM4Kk#@ h a[eRI'gM-D1VBh˓ 5m]ɩR.iM@6Әe"/~Wes|8S![5\xELjmP}G7R ftGR7Xy*W{fUuhkz @>wܛVHkP@+0i= Ll=G*1> L`1tLJ_e7ʏdD^[vMQ;HCE-;r!&lǤlݿGc!QuMPu@C1 `i'Nĺbw@fz> x֟nsuD*%Uk(|-ۂT"]BDv1c%5>DvY=t0m`)$>>n0SC̃ 1243WѬW'SH廬m[cBCcb(ԫ s+ɿB,cdx|@xD*(UM-->Iʇ$^ݯJ ŝ+2Vu ӂqbǺL‚tNДiN:Aӻ)_?(mv[rhZ%Es &8ndb ^m+=wL8DvINJۨc°.kfjFV;WJtR ߲}@T"MRJ`" 5i=h 'V/ւ*ӗD ڽ!|%O#x _<ئx7@O\?ɑ2V4m _lJ S>!!Mv$sq(spMB'4 arVhp ?vkfgs32mƔn#iO}X \[RU#8 MsF`-si)-j Z[An`2(fKQv3)3 ׎ʼV;I[|\*pV1|HGBXi fRD,7&ZQ=V8жrna~=&5% ꔃ{Fq ~LE!B]5%ޖjl 5)i,@He6}G=JA&5N%GQ;5@ϏUW6!1:/h1TX׽S  B0>^ݗ4 aci[`V@hu]m3MJH3^ވ l%!cygFĔTU}NQԬų^F̖y, ;kH2rKhu//Hx0jEdvg >cYIr64@L2+zvG ԕ8bܩ1%$)j$[Ao>2uNj_La12~+ZDp!fxvKR*0Bk2iAyFʵWQz2<ϥt\嫿yy6>ql1Hcw`Ke~wy"x,B${6{Q`~Ǩw| 쒆J~vQ]߂Y?k^x` k]qVdzr˛[|+;kNq|rq 'EQw?n< t{8 -x!_*?&h9E۴\7;Ƚ"R_K`qZ\5i~͓h^?˩6F&VXd2DL1SJl EaT:nM~bd+WwW/|qo&]_jM /AI,oXymO//jmmGomVL?_3(p1R)Nb td-(1 ˙ θVTa=ش ,C|M@3$yU:SI)FyN'G WFjWo5Kܭ$^˽3µaJ'NyDDw'Ձ#bfq gۏRΔ`lVf0k} ׾{ HHs:[z R\䮕JIKIeˍ jgViD,Ѹ1N슥J,k"i%U{DH.h,7LGv[,mXY9і1SEb\w&Z[il>QAk8eä!z6'P>YBp)8Sm@= *SV<Ѽ\ߒѕƬkPXTϨG+mbΞBdNiy^upQ1R1Ya506i'[|&S+ጻ×2׺Zl2<@fVHՍn8v`1Za5/a`֘XXMm#xQ:Gq\rQ m JC DlW1rɿJZpΣFP8EY©ȥZĂw/dM𨠼'ɹs7Ef>-0h[12fVI* ׽e~]6#A_QOl,TDL2aj]O 5#e_BmoUOC%Qc]OM&nwr4d"j%Xp`CQ - 42<($JNQ2 BkV=-z !5s(&piR"QGVF`wUo"f`D͢1jnʟC]67,_win)s@y&9)3UMn( IW<+AJPv3q -q8bkz%N_.>EwG/pBxFgǪ!Wy?," 嶲vdHHk3I/1pmiD`&T#-w2咶\)< ™1.5Iy =Q3PyebS.4ipEBF`e_B&jњ.0"KE]?IEF{/UЂ3#  qAe- uTߟ@aZן JXD'E 8Dt^K.OS\]Mi>ZZ&s;J.RKqČ lŵdL*hLyQYZ¾) b%(z[x"Yp\ڱT$y WZF$ۡ,igjn+ffi!ؗaIM >񅥢z kZ׉RXA8ezX݅Rh"FkT'lͪ^ւۅɷ48嚙]!s\ZWV[pn:ǣw:4#QV=mf*LƄ~{ b d.d Fc;ĘukT5v[Z$Ple3@-eLsLS!Gi17^)ݶwTwJkR9e@ěkv.xyr!!`ﲝ2v fL/嫗/;ߟ }kE<7g2C0Rw|%TDH }hQ;l"oGfR{okOI]6 +y;{h8 ·1RzSasAL<իlWJ?P^ 
FG#;HDŽ+##]v~X=yxyOFBrI`$?=RNu"3yj(c"mۭu(%ɭ=>}m ) MMn=^TPoڼy)kf}ϳ7M?8_l>L6Or{squDXL^{!nQFd,JW^6ؓ|sGՙ8.$~t%0(;%_oFɸTŒFqPP%rG+\=\cJ]͡O]Tzܣo!V1)Dؙ+.Kq3~'%ge/^xEӧӳJ \+Tp,i[w^]Pgf~JP)rLN2wkt/sxz k>@M*ʞwQJVyx z{%'ӋϾtؒI9p3O*Nw QS:T$ּ㦋顣^Ә9c`z4+<<ƙͷygnY!z jþ6#`uYօt7s"m.Q4_6B >t(oӅH&C}\* oD3Cy*G QVJ% wjtZ?&jZPá5wXcg4WJR;#qWUj'oZ3U-6PK1暡I`hSTt(.9?=o??яRqqO}PS9%\1Mtu5QYǥO(VYt4Q=ݵ(==ӻޖ3A[{qaY-Q3ya,O9тH{<=K2r9{|(VuɼU$2hpUbӺ7hfN09F3e@L^H0&d=M'ZX iSQ(|ٻ߸q$r)X O$RvgڏtwY~T~jjd8[,X/Vɵ, }/klP.~ Xʊ6-vu7pmXn?]^}OWYvU)3^ Wk_xy8 ~^ ^%\Ѵۺ\#sR]S=%޶CS& Щ(%M eFOJZ+Be-}4>p( sBsOf9:*MB%ŝ\`+֝l~:B*A٘2H**Y,ñsT Enldʔ(%T~d2 \aX)>O>*Å8{ad~R=Ig3)TgѳFԑG5/(F.7uuTkD--DA80csɐ ŸΑr]M MrL9t2qѶ mb(Hۻm'SIK &jRA J,<'rĜJ&9e㠷618>赕!SBiVw_V'nGNDM'% %1Hc||[oB(阐#k_i}\ش\2hegοM/޵"Yl܋ާ&1hC*:G M"`GיmE)$6j8T1E|NÐ,3.m}:Ք9"VZSD79AncJUTf̭҈hHxES.NSͬ3ml#Ujݍ qB HT;\2zS(l4IyDSC\-4[%_[c%UTmx(AVǟ5 +&(Ì$˘IXٗ)Rw>>̎ tLj?MƓ9oaYlPv륧j]vIV}cu?y:O%.Z=~~~H`k^7GCU)8/[G@:t$#Ï”&>y@* mf1cHu:y)'U)<^(3Çϐf꩎!UP"zu5k %@ /FxGٓ^VHc61˽pz⻙ٮO|)@pcEU[ĹReOդ4y~\7pX/䙎:R?(J14H*]O\VXD)#.+hWa&xrQՀ?.[tc޿sϬG梨f+3Xg q-;IGSTOz~hͰC9ޯ*p*~"! zt=o3ǯ+mE>D!7TTBQ){zr8&ToByQiA^"B~g}:R!e^|tu>{uwz;Qwمpr-ɔڋBTr&$Spxfu"E)J05_S6Ɣv@u\0 @!IbN;+.셫Ty@Iʛ!u{GIo]_ISsncRS^`TbxAֱJ)^*UO%vT?5&4TAN)T iɃ'w'Brh? *ESC)ú=䊈$T }BޥNu:'xeJs#]3~Z]tXgԗ&ٷ{lnkP&_uKt]~z~@=$Fik2 A"5 rgWL(ZH*r%˷e G?/N֋ٽ\nD:sFB/~Yo A.Vߜ0vUZyEV3zMO~@5aC~$m<5a_%7 ve?Yt9dPܟi{#gB#?Cxkf# f =grkk09*ڃ0 aJC6}hMR_>>g?w'>//?z5+_eY~_peˌYI iY$g_[FS@Hj. 
mf^tp̎\ d>M>L+Y #YJk(%9煃0eʚfRH P de_+Y\8W*%>ϟ.P~DS03ЖN)JYj" 'i*I,O1UVglӃH+*mƿdSϳAA6_H=Q(田NK\V'`s02ˋߟ.65rr Rf2/72R@$w._faq\*S*8x= o4ifhB(*o}ܲ6d#gz0=ҀVITeok%h#I紊g'T$!Rg%3Q ir6& @+rp3?Myf~ }o%C_OAFyb/>kr>2w /!o~NU̙~[tNLچm%FMlN'j\nxK+5&J*dGԆ P1rԄA4[ڻ TjhsdCޙ_", “ 6tp6V^Tw"^]7U&22їڻkTD 5H+o%Ty43Ȱ54 aanSkL`4VZ3C O)(K@-;]0$h- d\ZC鲙mȱ2GJC%qbx3(n^,9։7v!LT`> CT+ȴ3o_c B:I|況0hu]C%9㈠)nPgE?'!/RrI&i*x_)G(з4J&HQӔ8 u0~ߨ"88($;Pi`B3E?JRFdQ` 8@*Yo+*PT:2m\ %fC22ARl6cg2)X9&G_PQ7{?0BbL~TII*o˕-SV?))..K벸Z\ZO<@K"*I%՜25mTuiaIZ +z'vV]dhOD@kKTθɝ]-f%m d)jKI.* (42z;m+ 2QX-!Jei&r"I/n>+.{]T :O\@9tЯA߅0\OM7Q|wȧCL&\Tʢ6 mDD`^Ý,g<S)Hihhjtq)8ф!CJM&\Y61'\wU͑9^uЮKv]*Bo9hF!yA,…\9ϘIE1ļ ]UvWI)DMږAm0mUts.ٝ~EIk6ory xzqp^efG&S׿C-s@&O'Ys!0֜Jkf6U("7 [?nU =rLkrOs]enoNWW9U&`19SL,X;Gw{6O%vA7v~)4q1xM@&4bn @.b2֭F?DHO:E}ӒC/vԯ1<gz|Vś5Y$`EFؐQj2q괁1BS\{ڭ\PDH\bMq-95{7%|$Xpq&c >|22LZc@KH#Pnv pQ95+T}|)_[d"zx-ݧ͸2Z30N;<ҶӔp* 0uĄٔe(]^Z^l=lrR2AӎOfY`&>uL 2Cgd;fL@y3S7͋%5k AEsN޻PLď | Dp}96qhXfKmT /#b!>P⊈:~qo#e'΄ҔsCoKDr7f{֤2! SiCh̯ R&{g#vх*:[N.4^'֊X&i$hc5?c [d_mG»Q+ yN *1Euʽ˜CYt©v &':\4OC X`XPׁ]o-m+E{((y -AD*Zr,)NKDZE>Gw;޿K_sx> Dbޢ;0Z(_ց7\xB\cS~WY1nԴ\Z3LBn +dŜ-d&gC-6:fv<810hW=|cQx&1n,ȼ6 /gh40%WBpbNVx]b3Jݿ$JE5#uEHOaLʕ]Ԯ3/S]xn3GbwOYTm&ڝtֻFSusϥxW yLM0𒐵/JufKK"؉\`ņW&\d"Egw2s;H=x{3yN:IMR>d(_z?fvJNz&X>?TQbzy lN tD~5JwAN8M{Z|n]p)j[iZߪa2]&p"6͎t9AW`=٬IlvPpOJ{!{5܇T);dE%JH{1LB>!t2h'n53p ."<}H>&x[{e)@1JV30f̿F# +T0nj0UrP3 \$a3t)+Mw`gc~1A3UQe\{aCGyn Akϛ]&^g{x&$J&>s& 顒Ϟ޻jly/>h_5ϦD=o1f}%ЅE[Ե{teȥd*Ƃy1fK NN2/pg)r휀w^p\KeJo3f+Tz[=q5]!:!Ě9~\/\ξLÊtڤg8ŮK-m3{ėo1(ć9}<x䩢)䊦x9Pso37zh&}/=|Ď&fD}<ٹł|W݃I*)5G>0چz9gƒX$FiS(?ww*k[+ﱭG*w'>%䘲-Dj޸-!Dj pú-+8h!R  v {'[Mo=νz NKF^fȥ> ښz&|yeK/D5L|LnDȅ\x'мw7 j|P7K7ؕrp xN{ٝW՛"\v1 UYq1tc~*aXu'-n*wfwW 0o6ufs0 \BrR ⚀$g}^t1f²! 
b_9tfw_9o |W*Y O5+pcy:&ֹx?qtz9rǾa./gm8^~!{ tK9 XXk]~m?Z.I%̍Wg {󯦂C} 4ǪQ^l٫MjGM3W-(2R+s1tʕbQz~"'G ߈DTvWk]MdMBhُk=S-45Fx ?ȟMy N;u>Ç^Az}y, mdab3Sۖ;0Jߋo3ԇ+f_dܱk_k5ڰ뚁\Z=3܂&FׅEbډIۿҙi4;!Jػ1 (3繁&ljAKmWe|b 0v9q[3髠}錞Y \foY_17?:0ycC,fk'vl$_[NI Yx뽷A?+A5PFm5KBcL~;_%?}ٯw))wܹRO;Ǿۯg]^\~8<߳o?w0fk϶1۳ߝo'/5|_ӏ]iI}:huInOf:wL NmͶyQ-5d’ݯ6r9`ج:zg>挭w1lqvUw{[>h|Ҽ28P2>/#J'D+Д{6zhDᦈ0co&(ncwMj2,&ֈ`Q3ηS0u0cڬ]b):7:YnҤ&leWnvlOA53╾{,,nnO߻@NyCU3Ѳܾ{~0ԝoWJi޺o~v߼^^iap| =Iva 9s >7Iy~wp~km};K.\Wɀ7ųn?_c"f z __j߮>ׅaY' kTN&tZɠIQ 0_nW j8tTiv8Y|}5E?a(!uhCL8 Ӡ1d:. LĘLYAv"^ DhQI@.WI4{=OC@YTjO ʿ5xOkJˉ_ט)%1JIKIKIz Iz)N˽O^WLT;$b#:G{}ZToMVx3JVOpTzK,c8#}ىZ$ZԠ8Vv-1wlv]F{y_Ijho|vnm/ߞۺ'u]j҄Z>,B={y9Q|X&55uK}Q͙=^_$?a3ᵻgY9˱=9Orz5CL_ͨ,;=WcKrxː yHӀŔq cDI3ݘ4 4pg sŦ*\IT4"` c4@9Ll9e1b-釯Gh%$IjDzh:G}3E^gԅD44kPn%i ۾o%{]=jeJt٠#]{ϭ[kbyX.L:fϻ-؅.[fz퐆X~H 1w "Hw ޹7jRb`:-Uu%K8){ zϮ`@a@׈hL0|ڨ`6^Z Y&Oޘ LjP 9&_]==1k5 ʷ<rf&z=RqXmBIC\vdȟL#/˟=fy%ʣ<fT BDv%C"$8 "yA仄 -R=~1}XO5wYbsoSo:IOV @9 OLЁU܉AJީrE4 H7 ~jқy ؓ6`=2CV 7l1B́3Uuۀ$`aC-hۺ6vc4a#g00ƊYEŔF0B#D,#3w$iLyUyQ q(pIµ1PSxܢ ZAljpxB}-`yڼie.tV'_BV{?z%\* 9uX FhT[a~)]֬.!F/.LZ; գ+ԩـ0cxZ{v̧ eaw6틜.(O2ç# 'T!Y肂6#{mU |'Dxg@sK9  S2dLddHZ-Nr19ogS Dh-Pc$3z)R9!!q)B|g2 I|aW8'~* {9DsHX&7Z/IFYVN"(@%A.AJ%s?% "B&) /|LTOSMrbU 9[Va+Y291a?Ϗس۲&O3{$2"=!7Fb+s ygȃMk.+ƣtt7? Yr+8kO`YK'^G0w̐s}4 ! TFh` pF\ [.$:Q 6ZQrau AoNٲKY2?}VX:~S/c:È*=Veoee>ZNX>@|9Fsmk{ ~埙!~4{C (qW, åsp,VCTԺiM{q`:*7AALCd`x]#ƃ XlkeL#P,izR)=vs +݃Kԑ6#qS .{9IDNWI#8rT "8q<2Bxi #@4E;SYr2-`(b -DUQ@4L|`SE%>b"wVhU;_a|0|!@($܆jM†Ě5$%| ^>"#)P4". C,D@$GB 'Tx|Jcma IܷZ| 1vhKL6\߻׵D]gNg.TNF gt"ZmW Q; o:P8=Jj\dcԸHI~e<~Źa0)[.?#ٿB{ط>]ax@LPI*R^R3b%ꪮ,>OwN`쩻x e%8̝z}k"G捙-E8-{WK? ,YO 657uF%CZZ*ƖpM,Y4)c#IZv-tf_o|b<i7093ɸ 9hs;)j/pIoB*xqKX@ګ<(`,%v:O+oe0[c$12p)3)nνYmvr(jckj KFqjTpA[1 B.1^<۞5 d.iG^lܥ'>,pɪ>c\iӞ~f` OWVBl蓌;'H\P%$뱲o Ҏȑ}b"S9uqxa'iδ{iP8s6H-i!/cĮC(?͍?,Z%-Ⱥ@h'˰vh~i? 
=$T{'DB!#)hZ$1cTIP/) (])7 ~IonzO 71 scae!3I$+toыQoōŧeNKlmE]ܹV"{",'&T A0J夿uoz2/Ze]f8x5\>ǰ cHC 8#xySqY II^kDijSl$v"3 2V 1RnE*Y@s."8 \aaLRmIAm,Wm=^ vsL8)uT&X fc8Fes?l(vg<2P&:"#I9IQcRaU,&GQh,!w$Jka5Lr ?@y $oBo a~ҟ./*ÇÊ1 XF) 5FK"C*!+*WF+p܄ݔzyz"("`(X Ɍ@y ^1QDy?%)-B6f(6;ίN!-ILy~e|A7eQݍ,nŚϔoO VSMEJG@a v4].pƊ 6^$Iǣh/qN13@)T[V乍R V+GU]E8S4[%8\*Ij^5{OJ\d+V>aɇڼA5a̧aMA؀ĢT]NhԐtL bb`O`:FcF4fx `VSuxM3ml!Acm^["wΜ ]y-2%U쨶[ U|Ňի˗`om>{pIb݋F-Zdat}aoY[*tDH3LI_AR?ͧ&8CW!3U$o~||6m $a{,QhϳtWTjB*u;$JEl˧C QHKDX9T h-T%J#8қiɮIsS4 ,W/ԯ1<`㑰 4xpS,yYn-/Қ2[y L0_/ ?8%J^(]7,O/XeɐY2c-,( bnx*+i)iTYM딪DgK &i87\f\rt/FCp07GX1%iIҥh$TNTs&=/:;xRZr~$ +Z& FfDF0Rp6Н7]Qޞ T17ϦmԺ.*y0YBTD=sޫvy~;; Jc.1X[`$៫HK?|\ߏJ>HJr_x?Hhx?Xb'':D᠑J NIŽ'"{QLX|ef0ljCccw9+<2Vk Ӎ'QfGlI$b%E+=d}M=́tRh]7ASED"Z_Kn8;1/w@2E%8N+,#GϿ ;pԇC#AfOU/.zbK-U*Ι`Jk~u~ FHE "r 2Ba,jvvAYݒc-N^o,mqU;{ RkyCBl{bs1!Q cCH(qW)sܫ' Uѕ {“hrAk\>Of}y$e">/R o1Dž cev \}[o^Fwn΃u:g8(-0[&J{|S=u֟sm-")c|Yev8܃~䜧/tY6U%ڥha{{XOwֵ(J>+TY%~8#+_74PT=j- [B~t1#Վ ]Ea7{p47jrBL12븴1'v̱)cTf㛑:(@2 )UdIX]7 R ;'Z %:RF9'=ܑVf;2;T^N Sڝ)pl OwN`N( [5Z;Gh}yM`:7j<[>! Gh?)I%ңp\*츸uʶ;ROxizely?$kgYz/U.Fe:-\8"rG8qJ(wNcC5Xҵ9N7ta]޵b&y1go>I#]T)-X:BRC. 
EW㖋~6ͷw i,,|je<@KlJ, 9q9PrGcM]:c֮HWR1{=4AR#cadE3LDdX" LޢFㄶ{U%#dCP*4E.J3,&g E,p̼1ZrsE!hs ڼ%ohUڧsH9MUK#+$TK[~=;!y8D@DQxVbA(+TĐ pk:L&]E4J߽qk>R`JtKؼL ɍygM:+#ˉ!w9}xƕiI|z<qI=/>9vSFصxx= 9#+ aOۯg'닑'ѧ@<;ϧ$v(_y+3?KP!KJlvauѿcu) {B~;^DV֚(th$5$p)H *e=ٍ +ă7dթSW)ӃiƗ[I#s 4/HsW)fdzSV&>Qqv9<>&cHPgrNz)AJB Zk%h K72Oi"fb/=@kLM`*HTK& j Gp{ܕӋWڏ:ʖ.X*IWO"Ƹȁϝr XSM|]Hb/t{̰JNxbx@IȚ YYl?P}FA-} N5Jbù\ ^irAN ̉4q&5Uz2yޗ q\_B1ϣ3K3t*:ϯoY1,~`XǾǂcKŕE׏.ƋRDp.`藠?\b ,+s.~RMz:q\h?ߏ<6tebݷ2nxs{G%{Ds u 㛿 6x:U7gߟߜo8}{/(]$S~>)b<9*gx/`#5#xeH(7 C>ⱘ4{ݵ)^,ޟ^^TbOO?^sfӗx>^'Z~G;=nI6w58K&8p5#eq ߯ԓfHQ5QfX?\_ҋbiR8]nOGW?>nv )~տ`&UlX:BвU!/br  ^^ӛxws}[韸 H |/2JLGk$HP]O/V7vlk[Nj_| >ˏȇh3G{5NLg}1xwJQvR^atz4^OwdKib8aFb9b*>m6Y ڏnƸ.<-w7kitWh>y_ PhWG{ Jnk?͋`*LﰢW*z2bѭgBMB hexs@AXoH\peԍO|9Ш{JSZƇm(')]NIᚯ{1~" }VEopIX9oRm}z}[gH~l@+bˮh 5xaqBVˏ%56Xc8f(W]!kB>x3~ꯇ}OFYԢJ(4:07v?a4KC;ۗ?wV-W+j:+,UyCv'~ڮ 3EX?Ю]?Y>:q#NuuGb㿑ΰVGj"1+tZ]"09G 笕"RwY_~Q9QzTa 81?00i$&őBqQkpҏaA"UbPD|̒/A xB@ԮD߮~I`(Ԇ96v)cchc8'8L/Pu¶Ǜ)*+tp* ݋o_?|JgropN8<;UgI&~mgpxN8ܝp8YO8L f(|, -T?G(E_X'ҹCigAC N\cnq w+ms't :% R/~,{c\@}6PǍtyU}0\XȉUBQ"cb!(-…s@uc#J7IB(j7Hpeupƕ(@.M %H ~j432Yf䝉MF4b2nEы!6f9vmZL cO~|I3pJ!csU'Y#dAJޠ4H=>5@S"FͬBKpam7qto=E+A\OzbPz-^H#RpYչEss`$IA+Z,!Hψcm3SI&160 J;=OIx&'EE]Sm~&3(PF'*>@m'F*Ilc?@ϓbfYwۦ&W\\$X\i::invsN1^{KqIJ4CF\)rc[N#%N)m (qP,ѻmSds+[p;i˚*l"AP:dT'G!!Y[wi!o8Ġ&(<^^ay%-PC| T]Oo~Jgd c\kW jVȳ՝Մ((϶=II(J)z/[|3\X?@tgeYlfr8ݾwkJ՜ߵs^敭kDOHt:L zEyƝևk;MO)ӻ$)B(|zɰ:LQ/)b#`KV=D%CRBP&DTqyjS(|d<u囉yv Ҵ&;2u1 jޜ@mIW5A勏7+.$Br!P~L(X P  l0,IDRFPeǵ!FuAٟ/-v151550y L'"%N:okѶܟ]57'gco@`1S :b\I\$(cJ0UfݝBc}p65Xvb!@`;_m'`"rΔpCtpJ$#j'bA{FY)nFK'=o!h3D>-ts?EJ0%ЧJ(uS I*c,i :9upz/z$G2i[ELɆ*]K?bׯ^JUSivWŬ%-B @ϽIҘ6'M:yQs9i-JU& G*˅D-*-.gw&GۜSCAyPJ-<=v&sǗaqQo=ScLlQ&YLFp%պ(j(c7KAzrNrTT4Eڳs"G%HgC l"sv?6Ұpnd]jesUTv=}{4k!y=Nu7}= ]hwH.v`7=]2&^gFGZ\wx Ɓ7L}!^Gi%R5NbUhi*91ҝTtro|(:l#-SɎԏ 5wG4+{"Z"G#杢cHr_ttlEwqؑYx g$)I"GM@9pGKc"*i0 \Ekq"gpuuzeǵn <֢gه+m?=֪O衛4D!Oúv>JICqfRA(鼳NUo[N\̠(U5F);mfؖAi_ 9`ɸ 
9`|w%мAxU0[hjԂ\P;)8'|9R}|N=s%rm*alLhC9\NjmހJHSʵn?ɖ'/Pն8MC߬MoЛV^IELF\D^84eÊe5Rms%oKՑx%.a'%'ӓѷrw!Yma-~>/]Urv~t3Quh+2etCAtk:MۗԳ9tk?At;YOITvi}09aAv &h41}d,YszTPMQ&cIa:17$zĩT h`csjJz*.qй1v7tDhSQ_V@gmmTQT\'!BJ,}8`g[7Yb*?yhpVێO@͛8l;_sBPGy4^yϳ9SZ⊑ 35>߆E$yn"0hi6W^pk=FH?eӴ*7e4J~޵>7n#E5_rwuxUd>njrr)%+i^~ ɲhHek4ht&mP#Z3\$WIWTV)RxȠ ]gF2d{L3mP'+jlbrPXU5Kebޠ!.j&nt#p]u6&0GI!/X( )q,4<~a{:_\_3 'T0}ov3%|,iL,drFsksDVYn VcE|R鄷)9E:*&ӻ|Ca%Ttڅ+Ǝ# dW,E2:KWPNFaԥb1U!Q~V=D6,8J%-K_Ͼ *|/(% u~V2iIp5 '2o5ҵtmMǛ"}C0X8}ZKrVQT/SeN|LeK/񉟽 Db;ӍlCׯB2_șIO}36ӏ?|?FׇWCV @BlWn7@U0Ư9-VЀBX\)2RL:Y,FE̱.CPR&,K m>gǠulgӛżmFɲEpx柿E Zf `dL{ 7&cAQcM}c{jĽ+0'K|nfeScx=ӭ@2ٟ3)#*^^͚+^T3dBdUh8̱^0ky*bD,bMm%ֶh@6F osJ682<(VU6{g/ ɭ@Xjej}+g⧛`#wzN;/k0Y?؏&q[.(& n]SY'k7r:aJ6'?{ÃA{Tר )bB"-RAG ߺ:Յ^RGlͷ{ǂJSݞũS.hv2KC-a¢~_G.R"E.R vMQb9Ơ-ژ9g#k-Zk<?'oRYeNt>-@ /OT7Ӣ䇋0MI]dC%8s_z}vVs& EdF%ϕW4BdL4^Dr#xtֆ3qUTjHA\އI ]N'(rtI;& 3+EA:Cph(emN ,"E f, 3hGr1^GOesTr|FYdTB e!y@ Mhڜyy9XkQtANxoe @I8Jrs3Kd^; L*3h8pA$I f^U! $:]+l^T9SA1E3nQG6(W$c!;#=l@*Ӽv ҹ)q̟i0v0p@ׁ]uqdf1#m6bȦfy2r̟ >bE~%KBqzllJM8/ngoGG_6ގ q٭PGA`?Dr4=Zzm0_H-P.cڿs&gэl#^(@hA9Hg7c'p!0)YdQ+r;TyPkJ91"Z3VDDQ@gH‚.1",vʈLie^h:ku5(j჎!PHꢒ^޸% $˽/$[&.X* >\EtBȕ1¦>*-lUZB ʣNq漀.2-XX1 Ms#%)FЫ<[~M;_~?S3 s Jڨt8CZBq(oQK9ԯjIǡNplG2c XX[X1ULH1p;6[a(IA ~%T4kdjjP(=:ߕJjQR E7Pk$HZI GRZ2u-^7z/B< r  ETaXE\T=zrѮ᳔;A>,ft @OnɎ~A^NQ= jȧGy)f'`0wvt!k$Gz`[j!3tR,y$l \{d݁H9aH:qZ N72bֲLl^'9ޞ:-AKͶE7tO—"|˺@KkԳ~Btы4IIhlXDܸ\hc D90#35'(%vNbM'<1$<ʊ'܎z8ܤkf [@؎c?h7!+X݅GGw {v8~Ē R@ͭrn%,!|Pw71-VH2q_鳯?\-^O߫A/7: 30`jY)Coh"eA n̞On-Mvö*߉oS&#y2LYqiI k[쿳T2j!>ca)to &܉|yxė$'Zo?[c9\$glt6uʧMS0=>L yZIbFsDk־z&:Q}يſ"7]ۋ-}Un^h%k?|D*H6|>__)'j&¾Sz^^&]=bbΒ$l~ge,8dNōPY.U6d3]6SЭBelJڿXڿEP#sDFqP*>h('@1Pգ'RϳϳO}~'/ʰ}>phC)ᭇ2srVpJ:fSlFtHOrMxb<˜ÑV21a?y9vlC-f(GjLh Hגƈ|3冕Iu.T(ܪ!&"~v,To X-N~/7oMhk}gw<6od[hqy=ܠxҘ29tљo ֈ`ZJs(@8;Ƅv瀃ń^lh# bJgc:ئԎ3{PrG<+M0]R_&T9o۔ov>s5 5tnhk9X!7*V>N .ynHfLL8Ҍ M bZKpָmhMO][Ay8E ZfDA$ ]ˆ]!kP;m eDDr xqYT^6 (7,-[٦3H5< ! 
#1-0Ֆ(E!Zc#nP4cQIDdRqQDٓ(YmZ_iRgH;R:G-"*ȌS!xf/h!7Qȷd)W!=;A2lP>rbtyJ=h9 )P*e%Ҧ9[}NcgFnP 1~ZvG\aX ;bfpxNXȸlw6.<`,7KyK0x MF}wqqq5bߛovMϓ"'!et>&ݾod&͟o?ЂBUZK.⨌UzCf_ro/zg˯ c˯g*zLæ]8PX,ɔ4vUjM*u= | R""Rp7D~pW0L|%CW*?ޤoIk4u4~6pklH;¢~_'\Dhsm.ʉ6#:JD:٫6#-$1Ivdu8֨h_;N.y-?g~ޝҦKy׉'V揳+6,g#7rwmMn⓪,hΜvR')%[SLtw?u.EGxfDk4htc2}v'+1p&VOsI!g|DR2(*u ͠~kswJT7AM‡3(54 нY_оl'KPHM[ѳeOE\9rw|$Djލ<{> .iǐG7D[dɛ/afUn>EzyiE\à+WF2ag;"&z?NjK`|-^=ݓ5(s6MR?9x5*@FW±2p"أu*@~w>@bf0:/e&K-SPRP!ͬpI&JD6Tk]Ֆ=B^r6vQ[?rζc~kCrVϥ5p|qdP Uÿ_~ukkvG:P!T> a}xlX9&/[`vbd^L{5Pw*{`6A7?-ld:ޏ]g׉E(gXڐwzw_*. -pޗ.K# IڬV%@jˀnh7NnٸV;G&68[U% b9SacY, b4V4ձ ?AuGB0ѴsQaza 1V,Yi*c0BYCYCα:slh}>  6.sl)v .hX5wĈl&b%3b\kW\ &#Lpte3^Vc\wo䅕**VnF@XåٵTqATU7V44y|x ӯy *plx99+7PbկvrP ;HY;O(xEKU"JrU3ueb(xZ#JQZ~uH= jo= \b%-mtԝ8]|Qns껷4DZ ߒoV6O>i`]kOۦ wLl8ΖF}p);K>clIMgK J99X1|))i(9nJ{-բߎ)t&!nQYe5Zxs:xWpxbNTo]/OI-=2oIo6^فz?&Rm)aJNJ&ńWmr);f/||i>p,V{ûQm4?H<4zV-?w?4^mBQd }7i8~z͖#!'/vy8Eu B\3SiOʵQnȚkFIa2VS%E^J&"7tT466Is-+HVkDK>]hJiͣV"*N1Y3L8agȄQ(Sτ#WLx0H:(v5F )3!qy&;u2Cg 1Sh}{T&@q@rJg9?5X G;rz?.Vn,gQKެ"%zxAK5TЮS&a_prRRVT,IY P`j,Xl |{.LP]Ī rL:ɗ3H࿶*T-^ggPJ›p˒Ĕ龆(*F^(.yˉz]L(Yj|۰M 3nZ)7!fIHⱠ ؿuƞﰳP4!<*4 \ n\Eu=u]v_m\A{,1q Nh"bWq JdZatttlb/soWwZt!x):1W7I@l=\-6;@-/E O$'rf|sr @ 9EZ! -3iM0]qBMEoT UZ^"3JR3mǥƇTGXbPlq-  qϸq^0eC&X#ܓt"/煓܏ӻ!ĭTr]^"$ԃ Kf&i\kkE Nc?\iHP$BQ-Lb!@ p 1 ʽo67F6.83e5 b.5#o hSj[R!9-|l=])rHdW+W%.|KDpuBtV,fҸ(и]w&_!ZV^Z2JE۔l l}ʯĪ!'(yzᓞds -n*psoñ5qڑ5Ǧ\I X&d2֊X# 1Ƈ޾CW ppk0;N3" ef3%m\$9B.ښ6;R H$ut ziGOS}%!(ءv6ܮY$HK u,mͺzA`Uե~ TkC?筫 ^ hZ`հ3AH\XXDI-Fv/lV4NӔ8gi`1 ib΍D0 ]Ik젙$6F:lo<;]*EZw6|q>2W8;)e2Ǚ#g:Z?_:U- I["c~N&^')-Gl.T{J d5.&6S'ƸV=GO/zp3_5h?%KnIgbpwr8"( P &@9 Zu~p O7 _GBߌ_ݑA޽D07\?A! 
•^B{MƠŌ{gUꑥ&%0 XJT>>~1VܷY;kXiE۪T攘kvf΂3tBZn7yUhzvhc,鉜46oT+wɓ^<t&t00@i4ncċBEWksEBy i'E.""EW]#T' O'HsON~g˺mtgշ/W:w&eo?&)J帇'*qKl18|0PЍvRr1p 13#C]6v(y&ЂTΙrZf :aE=UDٔőBTX*$צDldǜŀBcMIM[4;/53RV9S-2hI0Z!fTSm&RL[ض¢֦fJ6:q}2)J2ez5.PM)H &ҹM,Rq!fHrJ%9iy$.@檾)6A`i/5i7L CZ"X)46oiېDǽќ 綃I5`&3|-8Gmk&_Zmع)5V)?%WR|) 2r)- |#K|?/nIzYYȱ%"3k6ƎΞZG;vҏHbS ţ9o]"d@64'L1k T7'qb=o0?o\"#]}X$- vd-CEFap9]jv`p01q(d[I3l>:nP 0 y:FXCy\̅9򮆶ue~2PkVOt;@1m" B 8?ä3 =uo)C$d```Űi{FXCyhWV>/-^Hz>Hn?:WP {FxTyl%Pq퍤 ם5V;}QjL#J EQj"63yQׄKp+Gi_; uGi߭#s]D駤\jOKS$RԖ(m\/%Zgc5 ǚdZ 9y"[{`U堡joTrAX9R"GͽZ 0eiBaɠ9p-JRږDWBۮ;k*!RKsf&uc.Wh ) VmrZB@-JNwFZI%PW;vS RJ0$Yc) Fڦ|lb 2'5BT$%ڄsҨ%ZMbL&=u~ՈޭCz|f,|ZT7>sκ3=y朵GOүY,\~=ğnɍwߝ y٣]zپOx6>o?}Dzڷ1cYp/POxGj.D;-^mkNӈؓ6/VًNL᭝l;`]cJ ~un'G;KԼx]TMWkD9k316 V:̘q2H30dVԞm%ڏQ oLxza$|\'2 Izs^c/–I @A|{Ey)8{sker6D;PB?e4_RS^ ƽ-w7us{ϱcr'<ѝgeSBkJ΋B5~U8z]]i;uiFb{Roһ.iyOA=+8MWȥ_|8^r{b )cA^z,$@T}11R&K.>@Glx:~n7i&Z &TSAtFenAʙ"Bul"K$KO(C7`ͳ+A W:α~UKӱW-GpԴynDm3p`c^9v,XуX b3Z}tPcY0rg0xN>-WRSR+}QjۀۯMcX Om49@pr:5\qMIrb&Rm4gGM)T^cʴA0GXK&K/v cXYvX!dVz0DW0>b Ab,+!n+駤\jFוUsgǝ.vRSk(NX-f3)X'r)}Q+6<|8ןon~%3?׏x?y1S!6ߖ䗧fh?qDxDxDxD? 
K"I/}Z?>l1e cb|7}yK~[`zO̿ޏLȭ::Ž dS0sHp0d.CZabc_7^}WVo u.y;޹ᛓO Ymp]%?civ'`Ό%s0ܚdgfN_t6>8ha7LZ1lf%/-۫~x4ѤoK{TE)b_<3ؕ2Sxt~yI7aGH[\4-qk]C<.9ػ[̜ Ge]PL[I̋RLW蘆Dﵖ^~LƸ'_c,VAkM"zm| 3O|Ǣl|rP#ģK'\{I~5r7TNbWJB⊆l.lZ DśS2ayj-j >0٪R6frl p(CɳP\ cP鄉rɀR#93m]JJMSB B9`C[xL5Qͱ]V_ijO23,5'F{EhsC9 W·BLMEG{")^R[p3#%HAU#nj%,Cg>K Djb?oY+h\􀉡 O3'IhDiӽ}Zh$`ASIilT&!RN`>G]K;q+=:ʅ ~b˹mEb~RqvO~ڈrܔdwũ:^5֮ p?1}Yʉ<ĵh[kUxM6*nAՉvG ][o#7+_fwy٤w0d6QVHg%U]$RER] Gvcr ļBϓz c &3kwJϏ9P&qpޚykn'+DIBHx a I,'Ъ@kJ%9J}~ؒpu3iLWcL+~=!~eEiz(at|Qn fb9IfZJ1vy^Dʼn)fg s1 % XIzv-kPa2i14]az l lW"04m <`#?H"dr4Gdߑ1]pcH_܏f N * x$z:'Gz0 ޮb(2І@.Z6CRLŃ]ogv zƳsPrQdVJHZт!\ZHHR*D@TR4W5\OZ!bYhB!?/#%n';[ q}.@".>=#؁-I %*]5 !z ^4Qɟ6Al.5nN(5ukl\rw)]˅Q.ڵ+tţ6tJNƎ 4`(j7o`ۃ!;(J;evCk&ׄbⲺ_fBZ Y⭇lGcqA0!&$_.KjsA2p[ZIt 'y(7 $:(rr{!:ASrXMiQp 5)0cE]%\!llіj$\zL~jEb3 -Yqj~2&X#i|1N):W8fcۚ/V5Ʃ˒>AƁnF{5=a2n 3LLEb/:aD(9YcĖ4'0 9~N ^3QN`hiO d*pX!`3Ŗ151ش s{R/C~nEn3pQ"}=J$tј:yTrP:}P8bG jwM%bN$ c1ZH%w<#k+S&0oqG6*A /+! R5Մ CVἄ25**)b顰UdK([(rAVA㵢@e]DekUY#JB5@e[]Ⱥ8 HTF*8'^MSmⅾB,ŋHdDmJċ-bi:Of*`ċ`uSaf{`e5u7K՛90%Ipl~̉_LX8aHnz1ODL8w{lv_U`pZ]mK㵒f4=8b Z_4Wbom/1_81j~|0gl=&b8zv. 
Kk::/#m!hLNH\W.BgdFЊ/*/8G9XglyR`X/^E'}WYSh4~Z.Zܼ^(*W%IܜSLEDX0h fKx^bT2z +M4E^NFgZT<M=xL['6tƫE_E~!/ \XNyUYmuxF2+u]7ă+,;KS%6&Yn98RwV w3!ozE l?2 uE0GNᔫ1¨8-;zIv>{{eՕׂrv3),f!v ˋ=&'ǡ *F&R:ltXOwKK vwt$Ž$BƣK$6F9?s)CmFgGt2ysl+g>^R x(<&C.>FG k ׽,2,;^V(+xщa^<#SDArt';(<QMKM"GўHOɐ+EkUl\n-L!Ӛ~gm^DVOWF@S0[gz8-zj/SA|kjߚķ~|J jȋ2ԳE51sS$))+XԊJLc+s5دoc-l:w*ZեV!9E T5(5e!7#Cq¡)҃hs$$q#RD44D%Q ʢK0J8:(yO?*_};JZ5N };g v{6bq(I l\7fU?{}F92%Fg^B{.g4\RK鑯D,ѩ>v_n1oF1Ǿ*;%v#+=NcI..2(c`xB=}]bL pJKcq(#藢S~T =Hk}8mo N Kjv,_Qi,x kSII͒OJyi^c)C8'I.s^>)t$c߉1a"ˈKI$6Vi;na,yVM?;0+է|E꧁,KFņؤ )>W|jZ1T$J QH(qJ#'(J1hI &@L<|zTC- ܶ#AaPAQ,O2%HR 7Yw94GT%ݫT2?4Ok7M SE'(h !,ylS ͲFK}X& |n?jW_FͶڪ3JlR-oJ9ۯkSԛٷCl}^A^=+i 7Ŕt0Lפ/u{'o/y^4F3m)5ql%Muz/ݤ^-NYUL5nZrV j0D^ %Œrhzh0HT_N eZk}xsIo45AXv'-An5{з:vnnMrs,ĩ.v'Vo޷?nZ)LHN-ꩀWQ]N mYN=@C/Z7V<P YWoٵ0LXv5}|; K'VtZ|3-0$PWZq3}k6ӷf3}k6ӷ߼DF\͖ӡO,\ _t4SJI8#Y!y;y.o* mm@cNf#cc 2c!)5G"!V@R!ȠcKq@ t{?ft^ޤKUSNkQ )r"+rBJ\V U1NASn18~M, T2gHHTE!(uQ*Tb`0U@P!p2 %0JH[˶zp![GsZc0b{ )$۾e ,%x1ӊFlnEKH'0'"Nkwd oMX=ێnՙ.clwƁ,1u$yg7` fҜ-6J1iUGjU%YLNCP,Bm}kY\,ϴ!+VmL#՟Fl6w7 R\Q|@ZN7dmwӅ^$>6&R~mڜn tYOe s؊y~ xUmj>#m{y c<ӪŒ*>Sm4GRN1P3= &N~1s.%2ゲV VS~Y#Z2GB~F􌥹.CVYȌPN{ v!KƓit8A\!A#] %E!iMvy3;H{. WϺFswWb^:2,RF ȍN ^Cˤ~Ѯ/h}fH%oa|m7-Wl8܂alXkZ ۫us(nV3V_g38O[%ћ[ 8pB29` ܿ=SiԄuZ2?&\m}K} {ý]=AȓG;z>+wnk㭵bQ(wu[.|zdLܳ6g %P@إ%.5\\LMgPlLaH_/38*N8b)c`8&R~ʍh@oh?i5`DeK1!Xq,U$hU\g,Y-%]]l+xbK@ӣ6'z<@DxS Ijogj{#_1aU, ˗]DM8v{%?Rݶo6)JjAfl%V=bU|J w@pRYQ"ifmgNF(-#+yJPC?(#Y} zO+zY[bDkxa^yYcnN+ *sD[2aG3)'w5E$bKg\U^`x<\- 1 ,cqqDlR ajP-AT׃˩Br\ +q_D!'ľ!)Rmv%eꮟ_Ku NZ'{SᎼ}[/4ܵZW\p@vMo 02G̞:O{\4q[o W|bs1-/qc%|TEds J7shfކYNT 'dJ{X"uɱ̵9ևmoŶ kFhT:hxnLyɫFy.\(N QBd8eDFJ/V={kyXH4߀윤5suCO𪦚2EYcb3[ Ԋʖƕ=+?SW*4-v!Jz;Oo ֎Krt_S*"6f@ $'rS,\ cǾص4fzE'W9= ܑ,/c~w]wř̘AQ:q:3t_;3>=:SgD'9#Ws36c7VL:Ώdi8 i > ]In_~f> Cii EE3z6r fb:Pkq>3A5 z 6%*򖂄 h)V*5hl鲬FNjI=Y_w˨qSm޾? 
^\$`!$X 0NCdph͐Gb`/\Yw bnog|(nVn_c}z3viĠcٽIqe6qmXO_ƫ7zm Fxc{Ѕ}k(}wDN_~  $`U!giSx Ɩ R1V={I"F\ ي 9ݦTQ@)%;If ¤ysAQJ䯮]9}/  K: ~8݆$4HK:sm2FO 82_ens㧾" 9W D}ٴi8;yscy"ǎh3q2Ê++x0׏P]5S fQ/g_c<р_|6ʻŗ@"^ԆtQi[1SJ.-TQZ4E %tx5W?yݗ$}2wՕkM_;?m"E VBl;~]TO{wFUy5kןz~M,ӯ5r?f$ĹPvvhܬqQ ᙸHʹ8=2YER@sf bQY]P QeR3T LаI"jE#y%7Lм DK] cǝy7+%D<[$H"_[Jg*% XT*-Ӕ_A&*? b'WN2X`$.!%u/(: |]Eb>g\)-*gG;7]\żg{s5yyr5\_Io| w| (󭗶Z7+<\,gpZ} H`HWKĆɁ.x"1~ e.t]9{tZ1u0MA9.؛t%V܀;Mq $nM:u0`mHܠ+JRV䣇*DJ(uUצ*˂YY3d|&Ey̗wfBA5ox,HwMH%{pM (^,CRUJ;ti J#BmBާ* }X! NT%mdksÌ=j>7@z.B kߩc V~@͘Za cU qV\Gρ kq<?0SU[ԡ"=HPb]sf@z_eP aո_)jhr5'U2RRjDftvbISVqo [6^ )l!WPI vh͜qRR؍^.]'XEL^H2< h1e2OܯLOEc^ ]f6D'L\ae@Ӧ͑h/H  Ӥ{2Snl 4ޚӯ' _9@]+lQe}Z/ww:f/`EgEbCl)%GqǻY*7ub11E #%&f@ }luj Ç<Oy1wUTw*jQ?^ZE!*@&6A,?7:Fr?%ĨU ?}rP% T ՌP` Ah9},E -d)>^Oԡo@uh{$n]ɹ/W>V 6B26aQKZc&WCݸ?N`HfEGM!wF-} "g]pܯOٶꦊ]w\ŇNMCݗMCd/o'㻕[yxp>櫽 S߬_M8R:fm!Rx3FȌ\]=Rn"y_DO;SN1C@Re6Y=(3HK/@tb,c]JsDPA6[[W?t@j08eok k!Y3CnhVXLYoVzkU;2AYp5  7n,Rؔpy,:XuE)HeDpVYKCYmB\(E,XmHY ::Զfєp/ʢ}u`)\2o'{&d]D݂d+`JĹpj*^C5ffA}^+Vo9 T]ޤYC!ͥ C-02wj,Ke$78y $DmqV cC^0:4ѴlRٗe ~ؗ%NݗmC"lBTb(+*IRQk[ V`JW+Th]]kQ\RjjRlsD e)js~804pءz+z+FF{*A)4u"rg&2tVF-jk&ơJb7(P7xXSi>ӣ?wXPv5?,Wōwϗ'~\(ٕ˧jw%c1l}k}iesxL}Q2fLs4"5GWJƙ{C@F/X=:5_B+PGa/Kwaid]iJ@?ݨK`,Mt˦"/w5]tt W6Xsw6`~ɳթde:+F\dl,.$mcCr-{`Q谼LJsDPzs쏑҄d'eЬͲ]^nI3j1֖hfs=C%pff(GFLj_AuFcNJ.7e;h-.cZ>Q/NqpR+D7;RBǯ3E'Ux6=@+h D]Xx7E-T> Fv͢)o8 FvShͻyz:!)ϚGQ(b02Q[R}f F^{nDWüր"Fz;92r.XyC3<ŵUOU!< ^YeROeH1hpklQgamCT0\P,XyZ>+}ibBTբA_^' E~V})֕4NU U~y*Ya\ kR}gFOr,N5~BH ~,oT ӥ&_IkӥHsTо,P &ug'2m,5Kw-(Vם 7T>%ߨR{>lYkU*:!)):Nz7U[|*|{/gSy3OVB^9DC0Ht~ǹD7BuoTng%- 0$*M/2 kؿoc?J 1Nk-L+tnA&qʢ%iO_MNd G{NLյjaxkx`plTopl}BA@&i?f_lynysV/i}4(4̧ۧr؟|;矝8.\2jןl wN֚}Sd= |˦ēJԒ>Vq,c/Oj%8P^<;hTSy")D3+&fq1O)VRr XQ2%\L%)=5v6^a_C)"JI}-Qz(;*;CRlK @أQjO^(eҡT2JI}-5=J/ )笽(=,͖{^4J  @)?RSJpeJ!_;ũQ Ԕ~_z(ZiM5(eY$8A4M((Fn<U0F\ cq.bi9PW2V$IMej]X"2.Jt>Xd<")&B^z< [#9¤U}@`#y$31sW>XvāǷ8pjMٷ:[xi*q H1o^syy7:r.S|9b:}3]ֽ,$P7˴]z(#mim}ÚȦQ^"ȠGu@CPK #fZ9*x9.nݦbBX͢J 4Y,7{zwZ7!aZᎹNy-hrϟnN)%xt\?xq-A L ؍|ZXZ¾QMGmі|ȧQ (\;!tƔסg:֙BIR2NJf1\Ri 
OvY1sj3`qH铓$: |'xtKpM$ht"~8}5zMRHسX.G1ZߎX? fo6j{ݘm{TI|{gKM'E|>.'!? JR`U>a+2f oǔauM*\vuJLBqgSg PqNW~vVx/Z-ʳQAwhFe-d^r/W.n(D,g{Z[w[񦏮h?_C~\]=hwV^ S{N{{,lվ&a!+&OmS**I>w炉[RK,/k0&Ljgl/{z.hYY-iaHgvDS.P;; Ӫl=#VŲԪ?]Z)4d>"д?7i8` ӌvĘZ@lЩ(ͤk kAO |?No]=}U~yjP5A\5l=WЇ= QDoO?B>og&_w͐ k$bgol|e PM۬Uaʟn݌b2Z+Z#GٻեB',Xm5ՄD7|ouSXrncϷt_+6\o_ƐanUeœo_KՈ߷-C=aKGH`//cǵ*)Vu6uJ~ZX/}7XͷW +ã+' ߘ'7?4F,[Nк =.F7?A0AJ{].Tqۉ-ss~4јD8"mb_8E2oZ}{ sK!0GٖR'/Ox'PJJsARyK9QJJTRϛ`JFB|)b) 1e EJ(KA,F=B) DJIwmy9#;},gt{ze.Tbc{}ɠ~Hɉeʒhn;*~_HUfmIi3NCRmKMo ^؂rO6hn[j H8S~M=nO8TU&o h? 70 }O]lպHaE {Pn4ճBκ+hU"3Z*ًtu/ ; s].k u_:+-O-=ArInLWn^UîVcSwQ}{t +2*p>=}3ޙ=$ovVEjNk[Ϟgn \ .g{w#t$bS.~;mW'нNGϜi[AӺ7z%vX!0q'}bE;mBVV6> [E@f4l6ё۱Hh%iC:t؁Qˆ4dq&0Ey*JǂGNʽ^ϸ[=Ed6Zú2LV/]Q̿pS9jrQ.|I/fߜ$Gâ߽"e~ue;'Ň/hdY4?ɳmoCW)ʕZOZnw|/ڨ|>|[A׷$2(ݾ_R id˅"K^ִ佾q%Jwjw #lcYO6h!ƍ<^]_qb{Tvf,cc;%>c㣻sډ=xoRja'f.~b <hx_L+o> trZA~uFZ5ls@M@g 3aﶥPrZzZ MK!R`nZKMgRܴ'^tN-%(5 (FNf$:Z+1MQʤk!30`$<D$XbJc!EsdVQ+uJ:Jd)ZɿGVF)(P9^W= / u3|A]X։wHt`ps1w]Ğ᭦9ȫtꆧ?uW5)L,0L™bopU'O7U'W>6ZNV,hdeF)Tjs/?܃/R`3yNRJ[L")vmR|y>*o9Swlt*٨sd#dSE #-2'*G_J+$0I/}U\LEp0{Yꎦ6\P\Bm1m/Λ6"]x uv圭U@&N*DQā*x}/^|(+qکVw5:-7X>7`JR "ft%zh6[bywrXQ%**s:/t{Z}wPn*U(kz;@$ÍoPl kK)-PϏ-t-L0!"Rgae`LJ}9BލUإtk!SEMdarôiQ|+̉=l$F687XS:90k: \oP?T5eoPx{+t*)F[:v@(Cn~uphӖ/8ERJc-JHDSJc*#" ^F* `1CgeJ |C'ԆH0qyKMIRcI RDB4! ňJ D+Gq2Sq!*\#4%i2 NΘ={!H ,blojG$#T  #g/tֆr[#t`#Caj*X3uLq KL *K,ef" g+$ΨIiƃN ؝R 5-e0reHJQR40Z 2O4+ Ti4ʅ-/Fi @I$B^W(F(fR=Dž/ _|eo _O+{rmnOԖ۫+jKN|5Zr/Ǐ:-5%;e4reM'*kJtknLEVss{晫l%g=_E2z&tw`^,(0 ։c9 +nFAi8`^5zqIhz\o%AU%a4EHJnFm`.{c#u13JItS1IùSǡN"] 9{]NB| [<#u!j;"X2#Ъ$c;S~&@A۫jܘSAwN߭}v.U<@}TAyaP* 0[p62V6fn Fz u35΂l6y &p-ժZ%Ԓ!\<JXZ,_u47b(Ά\)#sޡ ?ϑksTKnIW|1y*]2*.ۑ@/5PYSgk!T\>6cO=Ek!bDF!ʈ )MPBB@蔀6.gDqK"4zף0F?Gvt 5$.c_gуF,,nwH_"*7 ꮽ}b/}-zX${_qsG)IF˗Hnagycb_e~= '6ak[:\m}F):7諶n;[n~ZHOF&BnmmF[M(7]Qx+=/M"9= yɿ.2c"iхB0-r _?kħψmWo&vHvDq%G]/S5ƴ)ho@h 1[f'y,L6tY 1@q==hohih)-@ u?Mi毧%Wsށin+*@1"NJ|*[M>.Qab$'<\TRjpe{,؄;ڷ>η?rXWhY-SpvE˚.U@uUߚX @yb[9eg4y֯R7=H-x? 
iuaX?>ӡԺv ӥsZ{O(:%mOZRԻ7R P/)NjYWg%;gnfӤ6M?ZktvrYEwwc( =PC7t̢udsm(īd֦߫|߯s+wOyY'Lf%// YDeqRؐsS ]ZX7AKgnu1pQbη\mvnMh;W::@t~ֺnXNwTnEjwպ5!\EO)E9(_m{CysOQh:waL8dQT0|j_[}U ԴYnh_^}mNջ;Ԁ19[o#žu] r2^mjPXqѵo)폋6h#`DR 룰(*a7 6eh0t[ڈ]`< _TpX,M{QLYq |Q'Y1̂Of;ɂ׭_\/8^ύ})φٻ޶$W Xrȇl9 . 2.Hrf|FU7))[((z /?++xF$wF<~#pYKg <{983c<5p:8{7ťpw0l 6,z|Nkxz8 `˭iW`xm^n/βgn(|/muY`=3nYrf26d^-:o*k:mf8Z|{>}Ά_-+b.캕y|܈J$k"zҀLwfJ:בLoU ^hiuOo!%)oY.^yPv~s'3dUE]0n72hN{Dd\G3CޚAW?FsV5(}2QR11'@e~Q vmQ.ItO'}ӯp ݙqt8ld7Qջh ~I!~7Mfi WYd4a`O/c;V~;{7~Z7ÛۿN(a7/g@a e5y??exY(\xڻehM(eZ>?ȚJ&5+ MynUP|oCg$ nG7[x6_(z<][>Ͽ_ o2O`t+~ &tܜh%|scpK12No)ڂѢeχ5ڃ;s[YʺA\D釈0|o,j7ev6 amzϚ_YQS^uj9ÅGKe;%'0y0f=XdV5f &wNf |:]t̎}/ pyj7^L.kZa\f9;w"]뒲B͉Vh}zަѼ7NӓIu/\^^6a7jd ɰzO-dk-i:i&W&̍'`W_%fc[{X?=?}rp]n(#. Y Cx1Z0CĄ&LLO LCL@Gm - ͣ"@A]p\^c#0RQHT"b(}TL>L33;{˚u!o;HĵX]&K%8I'm }$5Rh{yQ¥,CV1C {cvnDuj]كM))E!R#X"4 P/bҼ4FB("B&K| {?؇íLn&#wL,ljJB5K]ZFC@撑'9Q, œ h\' (&RaNI H&4TF飌<5j$n::, kjE, IaٞUpXhy5ObОW1"iG#1ׅĭ[cM\JI1U nϓd *u{ !/,#:жgL#D^|셷S0L8J6~ah222*2@4{J MP$)SX2TGG 4G,=ܐOs)8}ZJfe@|ѷToCRuqG'nYeK?KipLǒU7K-gz!gd%d1o9OooVb MX<]U!23Xmc U{IFֽ+S UauTc 3}b.[}qK_׺I15{u'o琍]7s ,h^M_EjUs+[w?p.grJ6aq}yFlʭGTK{2ץwt ,58\IiźZR џSz}q/.M$΢H'6ADҳ>GQ l~N}^Vց\sǿb2텪$7 df/AjYd(36lRlo C*j3kz%Gd^0eoYBsV”ԭֽE6x:pNwxE3Ʀ4ݪkunu ;hL!FZz:pNwx"]{n|[!3qU1~N8%Ad<|W>7 ^ ax,)8톜޽RrNi) Q^k:Q޺1HNVQ3M{2QT4 v`:Vݳ%B˖Ąҩvs:UvwV,t**fnULjΊJr\cq]e 0qW0hJ$FO8rчa~(f sʼT jӧ!quu*g\7װ V+[~g XcZwwd:Y|dC\ "Imڇc+KUݥ\N]K-lXUl=T}}vǪ cu ;hL [?N?nVmVՁ tǻ0ۻU@wJ*3okػ[U@;|!g2Wrnu ;pԞ9my[fJ3Ԁ8>L DǶ˶dA ]~lʑ\lřZxg{)Y9^;:;R8IJVuo m$) ?Juck3\O28Sc s_ I_dM(2f|2琏g'>O/O-e]|5 fg}~&1+?aIO5-pg_r Dl1Փ` ~<=˪) Cϩ1o0[[T.SEjEv6;}27RRNZmR_K͐JL)RNq+PMuC񢔡rx/-8Jb]jFQKc7!Cc7Z)yޡQJJ0(eDJMaءQJJ(%( Vjb77J h@6/֥GR<@.UR}+zVnԪa֡QJK@)n(Rs:_z( Yj RR+5u(=j2RfW4b@)n(RS)qmyn\jycF)q=lBnz)q=RbLjR,P7(r_vRJPjf[?r2l%JqgziFٮ/ې@SmRtnjR:@澾 nb]jNP(% ĎXVP7fR+e7JtC)ex+PJJ0GR83;qķRkIcF)s\g~qGp(eԔnQŲ"0)nJrRK=7JrC)ΐvRӊg&[N$Z(So~1RGQ$gaR+N!yqx8B j!7T!+uG^~Tё {aDw(Px4Jyo2qg=\E"y-L?LWma4}d'$peȗ7uh|ӻnNkDz/ْyy4 ̠1ݛӟ2(J mLv1})l^.JˏH,\^.٪VVf?_%RS8re~]$@ TP] 
JwyP[%~yӻ@JznJVW%OQ*vCAh2d?i҈}Zx-Ȑ#J;^e9;1EG4 2")35`߼aBr,XTuۿϚeFY^P6kś[șsˢz2W>G΋_mZGn+?SGxLV7vIPM;V~'D%5c4:8#^=`W1BH%C=7 w"|&7_},+z葬jW7(mzRTUE撖*[<%lD@-I4b;\}%(ku, th2`Z*rFJ.D I %'4Ԣr@ Dbb%J CSLX* R RiEA&D5@$u ȼsCk[ ,F<Ϯ.*SW]X~ooUXȵRU^p57Zph_,[Dm{6{_M~w->|Z˟iNʷmf+2rC} LJ$X:0l`Lm?湃(/vo~0O!YP'nlFGSs`]s(}x{TZeTJ@%]ҠS\A88{5`HB>n'u+{f;@~Ȟ=Y ɬˇmIpB'9>b+;w?bk)smhXBbܺg#g#xutM{7W̝Ao'7ۯ׹mJ@4%XrJ&SfH:UQSRR9vY_M9.>o8ߔk³"RB+-W6$|aӯ唋ޣIxJLA5wˈ]; ^jH۠8=xsTQy;pkGTV*j`981$r7͙2ִs 8ѮAh„@sౌHCR)Uʬ G-E4M5>eY o bg'>kYS0nފ|2E;TeDęBg@S JS6IMXfY+*%n 6K}R m|:_m1q@9DfWSm!=Pv53jHDD=)EFt/3&Q(wLD* ZiL\ObXQK0&wϨn+ m>34z1+G,<]QCn)i Lٟ1qLTƱ6vhҘDLkK]P0C$is2O~¹i tw$]F_)@wS?G߽**/$,^fa:u{o*Ս?g7g+, WWnd9;D5#Z>OȤD,Wt&L)BF+?x6 z>wv}9nڃZOHS-rgOJC/egK(8n8&jI]GZ:,|qS&5z_Z}B<'Zr)+_-?o7U}eWY`9SVVgtsR|p v?0wV߾<*mE;9E'Qm?K=J0[1NqթjS.[ CA=5Ur"tX;;0TefX7ۉ+fpnWߚx&6o]9 cЊS.Bn3s7BU50 KM%'6$]Lz(H->}[J J^i{_/+hzY FAjvjnpޥzpʃeI%8<w\anTT3FAN=N=jB 𩟜}Q@͞:mPB4ӡZ)|;5Ck͈MFQYC{v,u@CpTZ\x.^e%'3ZU8(Ӌl6ONԊ쵋$fR\r7]&f}jy%Cbbwde<(\/Oó SEFD13(/?(E,jDn e yn/vU<)kB^`ea1w tB˨w `T!=[M4Ȧ~ƹݘD1鄖Qǻ `rk~ӻa!/D۔@vڕ]&#b$cJAjWqz>[}|,/|Q݃V;uˍB1~ S1z Ok޵ܫ_lj WM 0"3wNJ{HY̥e:!n)2,NTe')M 6@qSǚQIdPWcl; Sh;CIbM;ɴ vp&$% 7:5PHSpk32IXLIAB&s]=QDry,SQ& :4Ȝ)GlZ4OF$W|`@1سS nqVSk݌<3PE)N荚IzԢ1$&1@̹by`:'i'< yQ&N7 %V-:Vp{"nD"sExQBQI7"n(*9]=3NDDRk}EܜP R>NJmLHq{ 9'1C/\;rн@5d3B-0QT $@R:f &Z%TfX绵\Fb)Qw)T @i/yT޹!d؛Y-L !{5p;yki9{g" NN>68 Q<&p5Fjýib˭g?N[|wW%R rq/P?;Rgx2q VMz VaZu)8H?/ LZu >0!@C!`j-sز\ BO6Zp"XQ_>fuR_ݱEpOZ pnQbIZFٌƇS{yfa!/DClq#wcG!J`D}$|EZn!6d3}+_nNhuېYs;ޭ y&ܦX0e3$|pPL\/<?)X603f6|QxeV֙HZDʁBN=7%(y\-6) AJH#iAGC!e !Ii $Xl`%0mI@؇ݦnc۩ʤVJg If))j&TRQ`iFn훑s1ׅrEё2 ]?3/˧"I Wdlr[/q<tgŁ$^0JXd^bU^BTKZPZ1. 
H^W!+Yݎ;GYj*2#A&qS+%@%i kgEM"czˆbJKl L)fR20F)R*(ZdRXT8r KL5!D'iIhĐE J2#)c2e-&1&`YƄu5S(XdI$)$t0Q-(>3 Fs׌'iĕO$R TDnrTo{=NAcFy$z0 3(IpsPا`WmR)ڟU-ЊPi^sJ@yWW|S9wAiNNRn52>6Ѽ k^ OeQEEU_wpX|LLH@+SaV7ն!6XTZS%k^T{Ҽs~ݟUsּ*hEn5cͫZjȇP1?bTIj: ) :a -S;Sԃ@{mĐ3ʆJԇZ?-&G{ܢQ_ 2[e/'>0y卲𬯈8'P?h^k!9qww ϥ2&%Ju@ mX 7 $teP tB˨z}NE0hޭ y&dSJ~ƹݸw tB˨݆\E(AS nmX 76%#C5Ӽy+NIVBUr!/sʉ{|]үߙ E#VSQ)kmd%+[Jo45mwYKűT>'/IT"?")TDڒR,;c9)wTEnNs1Sh.sF'BCIB*cA]%(MQS0o>^N[ӏs׊W;<-y'Hw>7Ux}W/2x-b#6Ӏ=tyڦU#;q]y*7G8zR}2W7^V͗|E򒌕TNJ |q4/"1n݇k :i^UVmhj"x)\o+k^IW_HW5Ǒt}' nf<ݟwʌ6htyBh~H?^oywS:H]y}H\銺mv ȯ/49:͗cf).fv5Gq0fQ"{ Βu5rikf|KK@5mip@~-.=p/AuXHw5@Ct#A)!tD;ء珎ࡁQZ3vyq0K(܌]WiNt0|h^rmDz1Y.;z/Ϝpc1'8ցϘP(G+z43&yC1Y,S9K{1&wfxѨܢ~Ƌ7v6޽J~P!]KW{Z T'1?ah1D4Vƒ"!ǘ|e<\~;hM7%~yG4~׳|щxlx JxwE*=4|)RՈ NTxugLV]PB)RռfN,?) {{nÇnM/sojN. ߮ 9ttywJ2PX+ͷZ/jkmǭcc(oSZ,BXG*>M?\E-.9[}ohdS(^s)R߼(?=\̩K#ַۨiO xJba0`88(uһ[aqdͳ!u/H pxǟ Xkf o| ȑXeCщWC,O׼5n| =vzu>usr*p,;0z[\o\>09][RΡ815 'qoӃ6sM鬽-|rJk4B"Ze3ʀ/Bm>Me~lKډvRCr5 l!X0W1kdeGDXZ .W둪z"bRXTcumX{+jUM `5mrjW{PJX'J bM-& B='ߏUg˖{;u$-ms;Z :1٪FTwC7%?Ѧ6|wz͐frc'( ij샩32&TA '8c8!dH*Z1A&]hk͹N9)nƮ~"RboޓIvwR?j 1/O}_vZmҟ_vb4膬Do >F8t\p%( icFRfϵV S -$9zNS=Lm\TdB3`Qх"dRy Ŗ/l?@4?sf;?r;v_j\]O֮WO(݆#zZЃZ%(ow'.b?MbfOrk]-prpT7sc;?LQָk ddZ)F+ gXD= 3gk|`|T{M@Buv8 GXA:pcUt;=rƟw5 ' əE(PFq0v-r4ybO%ma+gy\QJbT9 KB=IZ3",!昖L4$?SbWg }̂8tA.?mનdmK& Z^Dkp ^pzD啧k0у.Z}%08;RC{ pdij= [McVHIų~#&$ $8l$>N0PKgvw8JtN<'Zsupxhks>7NF=o7K1C_jEzzVsmQq~Q3mޡNZT&(|WONtm&F aޔ|}y[|> &?lJ>!4 rv.˯~JAGײv xQH >_VK eNɜdsǻ<~zb7Cr޲?) % L{9'qI#u?@$\Z;wVRUS3?Li}iբ'EY 175jYyVAUgMzˋ*P &*)쩕/L0FţDOuI쟕lwQNO#6z7]7Z@,oF/;ݰNjoߎKݘݩŚo ]oT-YΕ͊6w8Un%Myģ~9+,eY::`jut3 H˜Yȩ$x.b >N!IٗD *1jMҟ/Hfhu%Z4^]TA̎EjBleAouD~MO4[mc f7f1/rbi[ *DcAc^' 7X&tXZp7ŏ⭑zōbxy eV x0fW CڗZK㊎̧:bv//+2Ϣ9ADF\[hc6ZD"'-zݗ:%j-u׋E]HԨ9eM&{97a_M'ATϊ9m<ISQ, 9-2iZ<-I=sгosX& M= <2mכ62? 6w{d]:0m|[fvUhG{{OQۚ4$6s̈8Q^ЗS>cqjj$ykV_g1B v,HK#Q&9woh޼!8ʯ2,F!Ef&C&WSGqY62sR{s6zON@z/eRȘ8 eO,SV{P,% ,%I/̴{yyt>1JWRٶ`IX 5M2jl4? @CUM0ĦE\UOI.4EV\NKa^p{\צ83WFҔND7. 
]"4*9(ZIG[ʟ\g=O8dlĜ`KH dQ/XG^4FQC4+:PCs``:[/I4XK+2痿-/iw$Z,VH+z.Uդ͵ qNRCa;ZZJqȣ;<ҿ^3<pBT1J==}K}$dξIi3 l՗IۈBkL?dwGD9Lcka3=1fuKj1 %sސ\H@"L}7#Q+R3鹤ܧz5L6Qrڳӽm7Gs>^ed9nJ^C"ɲog_Q}8/3!‰&9MmWŲ,haO- @ WHxA$YH'HrFp47zȑ:<Տ:_mn/ȱ|3a-_VsfچXEK E4q%M6i`𑽰l;J0tв\K\U 3]4I"I V+B\TUZ`&;3oї<#|&רH>]a[d\4q`.)>v<4Њd"Sz25SK"2g JesDt:#Y/8-bX8 _֋A_%JIPoII#LCfUUץ< 0?h܅dUDΉK4BFXf8x8 ,=uf] :v`ԲgDC SX7~Ro\~e2e ;2c c(E)2&雽i9(Iم+N1ЫL\$#yc=[x|/MCK}>Yy:L|<25٤if#CGn2蘔+g}]ō;/UYHQS"iL$\q!{S\OgJ0Ih-꼺ş,a)SԎJ7s01rpC Klf)mԎ0H! >ӷzT9 \{ca<]>1:l+\H=!KE?}?.'= \ㆱ/_->b*:%K,8PpS!H_՛ x2" ?==b(say0E-\hpFO5Z%Kf|ro`J8X />K,(%D‚:`Z+cU|%%lPߝWz1 |8/gޱ$]7_gVXā:dC׵Wt͉P)o,Q"2e})-^!9Jԏvs:ݎ$s+j:5pw`}GtT2GKCǒ\ ~EPs]g鴊cI}9'=Ax{ӆvo,aYu42ĆYWSu.BiBd)2h$u(>ڛ&iyHl>w TG6֪1d³ ayCӜv*)g`:`" d˜7 CmGo;'TqOBD@kS`ō@0]!#hA8­\3۽[;Ѕ?~5V|Og䀖(D 9*~ysJO];xf)s![v. 0ICʅQr1%FUxQ *jଊ/o Wqa!FX+DJr[EɨtU7XL&~_ 7tF{='hxWa:%Rޑq(Nr398ďQ"*bR08h"HO|&ta( t9[ZSMYtX0 )>YUsgYܚi\8F<@ e4+Ʀh|;gM 5%!bo2,m]aq=M=6Ew)A>p3qADNEHp[2R ]λ,N DT"߅Wi3 (3vp9M᝾I* Ȼ]+v92¬8T S8 1dVj(m4YwX;r7z*L/*pe/9 -P?H>=-b"&Օ>Q-q!8ZQLyVb%sNV}mǝQZss5=x.,:Aa-Y |Ua0s'G(ccJgI z&[vIt7`> |)WJ> >-ax+TӀwQ&w:%2:-M~_+0 (qlem<]:|=nA$8gՓDTk#X T@p!:Sͩj>IK[al \R*UT A7SIb,D4ؾ&pf[S"MGb(;` w\U~%1eMz0Yt@bUj] 6-t];Z"O,ד$[}C @5uVz~z[}Fnq;ڇUD{i:~rS@ۑ~kF,$OZ Q}cY {"|Y,~ΰʖu祝h>_AگG!ۥ@ͨ_1{q5ݿ~JVseMjl]ЌCk0٪ r0Jgrv/|O/Z8@NPn}z~+1!gͅ:f&Wgngܳ^a/|+Aifamxq% 3d7 Ѝ0Z2З_On.a~ʠ1ojwב8DCE O~[+ǔ[w$sJL0H,̃Iy /|!Y&hy#L3.2]u?euuX4S}'}W֗d_NCrMkw?aLR^'}jrn~w߹%kvVcuP{$p9Xc}j9{\rgu%w3|\_#'{x* 9TR%3(m7@>xs~(}GQ U='r*M}y|71]MԴH]ۋUңv5o E)ٻFr$W,yi5s`fO[mYVrUu O0edY3]URL~8dra<Aϟ FCQQHE9G [Qn=xw[n33z RŌЫkc_-_b(ozbxH$HiF~ңߴ \Мe7i!`~{7tЭ3¾ Kt̫Rvd_C[ٮ6ܞf*Bkopims5IT/EO /_J*:Q.]FEvmjD1甙4dJQFJ8#2&Fxm>K>5{jWz1x?Vh`E 0^CXMP,cE#7^icoɇ)r57E؍MȚfd`JHY%)$XhUz,#{o. 
ҌqsU5窆-cHŪhR$DɌ"G>O<2e$^GF$Qb/AXm8<of"Mݴk8)H)6!KyASOSrQ Ȉ[Q{yg-4s}@Z "XOWqq9':*5D=a`4]CE FRwGQX͘aG;vy ކtzhnwƨ6j1?Q5b=Fr@薲#x9\_>|費{]x59Ti]x9]l(no:>|ֆn?)GfY?^}z}:#5@ :\k)j9YzZBŻI"J =Ag0ŗ='iKMv"M+*@ %%ҶGx,QH.uب)P`o}5J(K5q#JZMHZȁ Iur&b`XJw5Jk cmπECc gLRjL+#F47B \:`CJK ~!NqFC>+?!C@K+Hd Yoܓbݔj>Q~[[T"w#>G 5#zI"΃ `De^`,&@Z(G0FD@pc/ub>/&TBh-ȷ/u0`$rzF$g!˓$ RC;y/0B&~7KAFs<>i(Yel *#z}֐ZN@jv}f~Ux 9+>Irq+5Ѫܸ=VJ&e)r}} ꧕r~{?PzeU?U5qW{_g뛵Fg/,.q/Y0|?Vo7ob a>9)syrOB^UJ9W3Uvh8ZwU)m-;T"Ͷ~{\N5bFrâV,u篕XګBBK8aԞel6~k\x%oMrƹ8^Po/Wf!^#PEtjZAhMgCrw`w^ $>:]fcS$WIPL-y)2K ZgsznX*jy }8ӈ3EZi ;5+9늊(_ʮ}@M43j,t0h]F@ PYy9] )WGnZâ1YҠP$tZGevڔ['2LybUQe s:'T5( ٧:}[ʨbw+e3h!Yd'9(CRYY5(Z߯@05ZRkP9Z^dZY#hP%^A |tVRؼU!%XÕwv?MQIG_?ie}#)O5?,!)@VA5}Z!O%NjC&]?Ro 5n'r3+4Gןܼ`CÅiy63㩡֖Yk}2 JNqlx*7c m0]KL?tc}hY7vyy7Dfxu4>˾|\Cl3y3IHh{$xAC[E|YoP: @5<:3:9ӵ<ȭO.e_"K7#"ydF?)$J"oDij7ӠqaU糰v\ vǢn ?IDګgkg+/[_}rE<ަW &k'b OS~.A<꯼j[ӿVfס^z.1]rGёELC )h4#:mDUxjѴ[Ӗ5[;6272 ?nSnmiPGtھv{Ticjj.$w.˔JJqGzu3®J6*O(7>%ck'c*).hsC\MTM1roaC\9PqIMRM}h1h%|*yĹeJ h~B N px4RҞTf%_Cekrƒ9Ee8(y,wQ 1F{k )fuW d~#){#\GX 4ai)|PQ뚨m:옼pz]v܏> 9Ui͇Q>f2Fw)ȝp |1}#[ 4\_&y#MN2*h0$SjۢygyIjpֆp}%n~ػؙ2HEjd| RwXV26RZ;:N6ń2@چ`Hچ Zy442kJAv@FY!iLc19xn >LF΃2Rvt0@g#>r:pqCv*MkmO^oѻ5%B㦸tGwwdnxݜ2+{ݜB_זG rosCKN3ݜW/'nxoC,qKo Heȝq^1E|WiK.blT\[ُ,Dybr0D& #45Ct7?v>r¾ DW2O3yT)0ŒB2gXPٜJyP> )6n}Bƣ44f2"8G!q/-anBl&|=8Ә[K Җ[SS.n͸\1"udPXCc ODJ1(0?-JK]h[Fnj>PV Eݺ$al˘30K3 |J[Ys+NukF5k!x`?gVBBTDE\Ht# iGO@.ɵMIci_+`!X4 :8 EL6+_:qlEOaj<#$a #I.R6Osxg,~[?Q'($d, z ]Sh xB Q0)Y"S sտ+f+wLBR9k!0/=Dȑ" G,[ǰk-kQ4? 
fV{\pZ~w2f2 ;Be`!-\`^pn돐>ץqDexNVw |ӽKoM* ȥD ,F ʧjO1Q]?O6[mH)IN@G!U~@L"HpHOWZKyJ^ԾdI.Wf|{LClGma B6/&I(]f°@Hv^ ɕSJeRnH #cy<0w:Lm%'eߥn \U|(Y:Cf29)te 3uƥBNKx*z`B)c`HBb^GHlC¼xHuUx׊byXu!n \n.bRq`xU**pSmo ?J >J* ʼn:J*+9J Mlnj+فUU(JWUaXSḗīJ~hYU@"UT{p{mNa.)U$UPR9)aV}֥+.I 3Hp_n?8 ZI6?zpXeK0r ҄D?߿8Z})GE-p>N=ǣԚ7FW_j3/jpuc!ͮ }ỵY֤#l'Yq= ~O1ϰ_id 7$fu3F/x kL%]/ \ȼgC~I7gQ"֐#s&p!C i{"RiyV-xhq!2M;0 p-H 5ŠKiNE'C혩aތf֎ՠ $LZ_k0W!N:|~jv2ӑ:&9-sQUȪp;G9BM9#Ks]0dpOf9_QP jEbb5O^{R?4y;rZwmnFi9$(<)«`PW>"O A ό'6{r~YoݴKM0H 5PqVg&):<8ߕWs>iu1wJ,v68̿y-ZE6~s](l[raeOq|\of)YwsRK|2D;%@Nؠޓ` xVaC0n T7~:b׹rJ1KW>[>FD yg/BS8a|ՔɇבRO}w'o4pHT`#`9$g-C*!,jM(o[pntgT3UX=tPJJyU+%;nljVr(%"NQX&J7Q}J5eOG#J,R*s[NtgT3{RˡR$Nr(ͩf#J*>P wao>jQE9<+v&J(9պpRʡLJ7Q}JQ(J}kvÖR]j=tQKƥ|(vd\S93J+Tj4}.RFv>[N(k˞]N\K}oY$_'J,Ҝjі7J.R"v CiN5=9JiI.g`A-b T0+yX:up`Vt40IPK)<>B%㔜jq_w/y75%O6S}Juj2uD^r(%}]wCiN5+^xD>r(JUb'PJY9T3\hsD^TD*twRgK3.>1W]2ͯ=ЂeVҳUJ9/L`* Wxe4dDXPo2$PfcfJRWW.H#y^읷 WM`y}v$JwnU ŪRɢ7W~{VNϚJP3*I~S UTJ3LHf(gk4󠘜1]"ǜ*:~}̤\i&kԙۻ:.x͆WG1y1a錆ag35).ÏA3% #mp3xv0/o~:#_Ge,nə<I~iYRmǽCB`͇az^->6ětӫ`SŜLG f؄, XAGm'E2Iý 7(gPS* W/˩]9rʹw 0.*DT"zʬaV!<=ڰ"`U%MӓС^Ľ$A[݅Ug2QեuR?Z aԴ Xؕܵ<V%K ɛ%Z,"/H!7,B0D0%$bE$vD><.T-QGSt羿οCOZNy\xÓ@"}3ӷ#wowV%@&C0aOkE4ZcAA0璪t婲h3XYɂ#cUXa[GMn5}ѲmCSsm9y+{]ĒkaV !mcl0ȫ( Q`euԜYPd`} k岏! ᬸpwu>lx2],wrτPB=|s #UZxLwg|u*\`x`F I{6Xu%5oKf< Cax;=4.?BkTIJJI1wAs(il6O)B%b!a/=yXRNNֆa'Bs#ާfʒ2=ʨƛv*ݣ*,LɅ`DRng)<*LorMJ)g"]{Ny\:wG@_SʙS1`A;GPM'QDw1I"  [Xޓ:;K҂+ᒷ"N{w2wfv6V Ug(b: 6x㿦3;Oag~xytlK8H Ҷ.җVeV*ULv!)Uh<ׁT) 8ȀЊFkGS9쳖G^d8FVx ! $r΃UFcLWOpk rV "ctz>k%-o֖ ULPC($.Ytr? 
7EG1s\ +eE;+=8'm%7p0*ƯPP^ iXB܋OgmU#6퇶ܻg?I{&"`1]jul,7_uWuz!QmhHTQB!/y] C/6Aζ +rk=ףt Þb6]DXQ5J~6ϧbyOWS-&Sx?uz.w:"&-E1BNܴ_±GX$/ߎnYH4|\1*[Hd\4k8>n`v_:~_`-=n#7"eݥ; CY,0 2@ YnM˒FaIKdTKwqlXsBiO`<dp43dQ#Kep6k VUIoA:_6ލB-?~kI_&I[ݾ6]|qF)r|#F`=sT^g&gxȿܹ`< ,sb~Xk2k"{wOǍ___[g⛞Cu2)4eG 4dL Tji,=Y_6:g<Y.pLxM Y6 ]HrҬ6_i .?͚wRY(ڐ[J;]pmf]=}r.4k# -hQhg5gLj\5I5$_xUI͠cQvKC"@&OaF߹) 8o4scJ аiho6PK.7dJO-  J[~1MYTp0Ԋ*VBuɻ'PM+C$%8UiVP9^m9RYX JF Noz;]5i4qvzУi[kvu tjXANӸ&by1ԮIȆE+VH?zk&N?ɫ$ή+WI$O.dJ?h7MkRϩZ KtZv;xZ-ڭu}v@B"F#.|Djڭ- rD;hszZ֮8vk?U-Su!!rݒ))S{-X =?Wžeg:.X~wPu,L6DZPzч*c/Gۦ1Jo=SRx!V%.˒RKyBs-d 0"hj!ͣd( uv?Lq',sI'"gZOD[=MI:|1]nf{ .ؒqNpf4^bs[V`tЈ "n:?fjR܈8q=T{e}ǣ'0tьBr;N*B}+ y)AyΌ߰4[I_xY6d}N9_m鹾Zfh IIuMPrz.zVY\~1p.AQ&xd?SE "TۀLKŗ7-l|vh*AOSd=ΐJ\y qq;^ҝșS {ͳxK' vqw-yJ'7b߽}$=1f0/7665L4|kwPKF_nD#mh9A(~NAm-HϤx<\~ÛxV%dE~e-L(`][Z8%'FaK]8S =>9lsvtS<S)։u.yjs5OV}Wܬ Ufz>aކmV{2?]YG=)Z?5+ Hu/U?Hxq>*|:Ư|\eMpY1^ItۯG -EbxkUI${q_S|j"dRݵ>:$8\4CNдudr6[0^U7MhD-.{AS(*kwÃWt'ԺHessþm7@o|/ ȹL9Zlk* 6-Wy\i?vwZ{n51P@4qN Ai}.DKUn-ʚ-^L^>M}8؛rl9,Ⱥ>FZry1uT ( R[#D1Z0qt xQ2e'"1#G%n:DT[ t"99 \,DZDVKʀ"ɴԆhN*jmę$[UM2ڜ@&Z5i6/^_H;LI2L%/-Ĵ=o|ӽv9Lw*~_ Si_=~rt=Bu3{r }gx| w|\+n+*Pۣ[,&~g&Ϸ >[2D͛B2MB]>%E喝cM˓-@^ ؂ QOh0ma&8r9k?K?3pM{pTU_G!>JTN5jNz`3%E׽ʭXe} 2y;1*Z|K>t|> kc2{>+dLtް(^d&E?K;g}mhSZƿ~6$ԖڞuOM[fA/v@~ %nH&b$(L@qעjIdJK3zNpE_лꮌ#^b6CA "9lj(Ch,`cs\戥fk4p,?/%`dhv[ QL7n/?;fC# qdzXlxh0A5 y3t,dc=6@Q7(V(*4¦\Ibs Kj^P+-t|G* .wCMAwZ"v3bQ,`"zp,\kp;צn]b%bU̯[Fx|g.y(ʒH=qQ)Q/ޚ@ǡP<nR;e j"a&Aw&_~Pt}=ր%d E"Bo NXeʋR4@OpC;>ojj(,R`I|R{E2%z, RFٮno Q×0Fyod6&.HgMؿ9P/T9Qe #Eq kihH |d͸B!"\ /`z$r[W\&F&9IRqCE)AA{v@bD fL7݆_k融5I1lĤ*Y%<*jAqmT-mJdžʸDAe'Zw̌4Ԩ.p4uz2.$ig`' |Z]PR+D9609bĨ+ٔ PՊ\\6%E$x#V(1eJ F2J{VfS/o.6$oP35V?v;%c)oWWO AZk@حHf$?iSK^Z[ƹ,W[.pԞU*SLRRΆEO K)sژ.J7I~HI ˂lz2-UeFiֳ*fXObqM6ߊ_u &j8 'Xk!N`d||a+az. h$֐QǑ_~df\)d Kuba|)/;:0(u9 nhBxBI)R#T::ve5!nLtm4>{PεHU 6e֑nn IS 澻ThfazwqB__3}B]wm*]NVjJ?ǽ-8 JZn8q>BaI td ^!]F>$j4tdFfiUiA~)͒vr|7}& 56 */RwCjTļr۵QŔ)'3J[^OE a ]vg<AN;- rT(W1lZ2:DW/5H*/Gy&a6V}΋eEv`i7e(a. 
=,nfq[s{7?`6/ELl3~aWn}iFtv{=߫MJڭǖڭELI~c&@vK6]kFZתhjHȟ\DT2\8S ͙=%:ǜxBhٷ7dSӉґ28NYJk™?}Vi#Gp#FHkjKVCRT n繺mD]?<&$:#׹/omШ!x_dDOײw"I詢V}f_z۪煳*¹յ3#)pcMOH;;dS !tSUnI&Tφ;e (Nrv  `w}P0~mu̥ÁNFw^tS . X9 P0^F`pܼX57@`q룱F9ZH,& cTecQefiA )ڭI:34)C!th MlMq1cdeC^Izv(i": ΐS t|81iXcH_\ą HK4C*(h"Gq VNI% ;OħG | |w|Q9]*Q@F7F~e q*kH/TY(!PY*'dѴW_wPhNʎH$HH rwzpo(j/쒫ېż}+OS ZHq dS4urZ.V/$p.Fyy(k^ht4Aٸۓ{i*⫛yiV]]IQAJ$)z]19a[/مq K\\-cʛ儛5%A 3"]<W=Gk+ϖc9ݷ`3K2˃.Z9I⤥E :x{3];6>z(I/b+As37|hB8 gMTlIO"?vo0n٬+wf !Idɔe44"Q*M&؅Pjxbj}!,4IipO&tfcgi \VԜܲ$b%]PiIPzL Jl"fݺ@*xPif AUo)F 3烬+a٥1Ti9]U%gk|h$gf*^$4C=XyR,|ϭY_N p$䉌eL\$<1ޔZ1Q2Yu*#oS8BD@ Bd5PR,޺WX֚2knxACG˜c"(h<ܨG\m _eeJ|pi}6%ǚXLQH0<)JH̦/yq:luͨ$ꈺ.Kw+ٮW%˻}"dFZKG"%;)%sk*ly,bπ^,/a)ɺ@DRMfjA Acj"V\AF@: &j'[m+oNϥhdXו좁HC6 Ƃcsh@l<8.,#&pYϨ=.ӧvݗ)Q&CQ&f֟ VZC"vm9R#!hF c@u r-0=BW\QErm'?A0( Fh k]|~}~Y| ÆKyT)0kmV>&d  "=O ^=eG88}/m[ݲ|փ\_YVjJajp sF]>* f󆴝9G=y&ђ/:0tW 1 dܳ5Uyj<ؘRI{<"'<((@sFCoVysx]4uQ]])4V{ S>Wɟ$[3IͶ}`0kCkz {˂`ږ&{c=@??i{hIycO6r!ԗ9^B{Qb%`'C[G{+@5jf|6N :Wag$^r=_]TڲQ[2$d񜤞V8R섽;j,dB"pIF`ѰaW kubAdժDN>1dyzʅx%'{2w? ~W6d$B0T=8 {-cQBU&:ӡ֨H׍΅Կ'A"ii +ru|1'm/;vZ{hHE_? !WZ'K` dVod!Lv~õNm_ŮbhG8&h$5 {A}s0`7Nvuu Nc"!_'u$ MkA5;TjIwںc-]:}:-HפRMߑ2Yy6*I.k'wNfxBw}z93m(Sf-8F8lg*t"1TTuWOM2Z2U/ϐCy%!폣Jg'rC[7gt! 
wah,+ˎ9MC \KLgMxC\|=qA[D ox9=[I"oK?,I9O*LTUے ^'宎rMR3|R5IG?2E͹W%V1YqB-aA>翁_}< G!l`#v<{jc Z@ jg} B y+3koy~2$_{kunׯSOw#ZoIY*1'3)$"?L<`=ɷ3;&Tv0H؍ qcHh:SUPEcRڿ_m>O~T]Z'w..koΆ3U'#ZOaQn#m-BSc",ls.Ko!dȍG9VsQ?I%2Ih-!h0H)i*f!B9[Oc`IAhV^El>$0uh)x u+1T~6LmN5tNp6({yM7\ <8HPԸ4M/dtzx/ }Ur?huЕԆ/s6齯+ `鎹$ɌWsw'" ίÇ'Ml=݁>TB_uk/P|ͨzcxZ|%H!֚YRouE,(2K o8}'u/kNxoA(mUB'ݺږneҭ{W9KUChe!wѺ(!w.lz} k5u>PR8-&Z5[ |8wOu5Iau܄W}ϏVseTx1;,2Oj ִ7q~E+戊K%~Sm< ( &թ2\EbӃS/zGy^~W3Fߖw[/zptJ8.6%jǵp's&XWlƅg#Ҙ"^$^K҇NtL|}m+q8ӂwd, vMAO6620bэIZK[WB-)"qZ&-bIrFq諃QzZ|Y\06Q[عt" <60icC4ߟ<e0`#l :.eZ]y/ThZ6/ί­MJ!8ZZb%*<f(dͅ1bZȷ*CfGpɲ/!bh &vb37(뎀;v Fj4 lb/Vt$rmm3mBIjgb (\9aH!bak=2s`I5h RV"3UH43%}&U}\A#۫*NZOڄR[-.ӣK9J9fyluxYߵ_|˧t3ֶǭQwiր.FU02zhXZghҫbTF8<`JJ&~߻< X^娉"& +&LN9Ǖ']1 -4n.~A+AYnC$頓ED";n Bx j8ʔLDZ_K%s/^K=MSOb\ V8 JYH\,d+#Z~EYk#$!sRzʯ|2_?^)%պ__y(}^;=Iz;Y<0w|w/V4ϫOvvWd\sN~Oߑ/7lٌѧ~qw;3XʢxC|yI nRdyK}⇿(%z rÿ[T0&ke_.Ex"{q=7xݤHk)]qiDpOn2T)R'Ewi9j<\I'`L$p-)JFױI3zy^^| 2Pr$yX$ݮ9kkD>( DU3.ř;!>F1Ldf=KrdMС;)8-`BhHW$it2LHJ@)iEFc.!op%*z3!U f}`C"G:r4g&! mw.([6Mn'C7:=\˷[U/wrع3q4F֋F{ƑB_nsHKnn_ #>~դLd1g$$6E]]-]ҿ7yWᙕp Wgy_Ѿfm %fvkD^+Mݚ&K.$9bTTC Lzx{!<+ca,d'K/ d:_FK1[nn+pGN$hRtSh"3Ĉ ;x&s3Ak:m}x ݸ-O?K;Rl^V3$\1"=3IK=m+N%3R"k> `J'\aȍ+.s׊\*Xhkp7i2ɤ韷ku-χdL2kµsDHĀSK1LTن T@wjC!nViMn$ªU.BJ`//wi ߏ~xo'r0rq룱F9'R'K)1)l}&zܽ4Ϗʒ*j~q5/]7:G[G wэC[KpMrwXټ(!&i&_j o &rvrO}|!W;j4/5wGGcBw9:Mn 5UF@KlMdl[M]7NSiV~oc-R| Hv44 dJ9wu_ZysYiJAv@FY!iLc19x.CqaZ@Me,D;J%}ՔƍvdeV.,jGq0Us?ڑAS-wnjI*:7e0Z:7f]:7$wnjR)(M`1AMP jtnj]ޗ?ImgFv临Hc@/3θD* _-j'IAǷlTY~" !2' Fh $ Vk/Ѳr#@j ^r[z:FO@TAy%3YG9HfJ=3S)ml w&rOK`^ݞ 'H -]V0%69Yg z_M'~iʶš&Ę#2ht|'AHRzń;$Δ4S;h-.YnVP4 ₏',EM18RV$ <,ؤ3i5LvqfKe4Ϥ "QoL&Y ᙋ$ 9BR9BAGs2:VCC]E, C,͌(!#УOռzGVZ y,`n/1~|)W3ރxzke-5 g f1F̣sV1Qp< 7=`=5AÐr)c."UzADQvKaz֟yW, ^(io3Sn-t H ҝƤ.seuV˞ω^`LUYA cBLY8hC:R C-(P\ =*h&{ >Zq/Гth7d()0}cScV \smVɚU9||cXT,jBNbª<63B4>`qlUzh&E8OH$7^ZT:€7_NPKT.K">?.@ir|`tpy]ɗRM"1R!QDPA>ET i}GO?xkTyM ˁ#:L͗CӰ qa H2X6WG=‡C N78j [ S}+n\R[8݂p @ ɠ5 xF}j}1cϊqG .Byew/gqPDt6˛7"ìH<eIb5PIrSBrfUy_LJSsztSPdhp`m ^swkL6\?n # 
J)ը<֠-l|!9gGVwֲ΋O-d{Lh6Cund5H&:q6tt4 Jښ0̉s1ePWB&m[Ei,ZAi9јk ,ÈՃ#.lqpd(ILU0+3R#++5&Din2C;z ۊ(0QMQ5URs"JytpRXNJQoP[xxkS}.p`Ɲ$\M(>u - k^LVYUû"lCb0z4/G7[-B_fjD?.K) 6J MzQ5 rj!'U*ٍrT):w'ZR{h<.S|5Dð5Q\ Տp)ӁThKӻe% n^Hڃc cM^lp?y͐-ӪKTBoeȂU3CpFA[ux[}5~{Ekg4XV:V@$tLL`$KK qm1e4 "h+wGF*xVf=poMۨ}ߪ&w2o5lI +NHUxj JI髖R)IEJ2;!V᪥M|(4Re2;!VUZ뛔>)X#_n^Ԋ7)}R  X\^Y=lzUt|.FΎy'J|Qu`;ʘڮ2uXpP.Òѕb̍J ~roqM\Pr ^.3(2E@dZ㼼}ק* /P|fRg>>9"|#k, M)ȷZ ("`&Ool xM&pu 4pIM10jgtuu_]ܪZgUB40L—M'nS—#UU)O91_Ƃ_ˏ>, -Jg=B齔OOFWӻUThztQMK [!P"2IdZt8ERd\,JX/HUXT{E״w|z(CDW1sz>e<"fe߬ZSt˛7֔6T[٫b"pq\^Q8ŢgMCLR~̮H K{%cڿ@{>hi:櫷IDw2I7 1Ci/7_IQz΃A 8(l*ko̲2hX[FmIP?_Ҩ 5SE[Sа>`5Bo Y٤it9Ƚ6EBv"HsT ɇ5^= ~MJH]2%nB4x>j<? gyq,6\flap0掅wmHW~۝2#z^k㞧1ya{V-5tKfTVeLTe`0 #`/Xu3tJR7mT֔+eAͬQwTf/w4#F"\8E/ށ0{O_^]xy ̼a0C<BN} kя^`w7\?bV~o-6Jcl \6=o닿\_⻺#~7Ճ{2A[;5ɵ_C}BeXJ2liwZl 0t*J ݮLݔv\Z7e}3B?$ hCYHp/b\iygiZ+8M(;( :SHV@5]Fpz0soQF+LOKM>xO4tTг_Ϸe =QN.--5V}AL.R=$w8yޚ򢺽|ݑ{.zjZ~)ߛk U{47]H: ˻Hֿw\HtwT?$هMV]Rt..7|>IcG{Pm 9qM)dj+߮wk&3zT BL'1mhO%r&dSex`&`-a¬<=f+o RJȖ`lKMpңҰqM{+ #bJ͕:?n+%f$/%X/=$e[jbrfCR¬ksKJ[iT%Ѕ-5 }maV F O6jpdy aa/¢RzRVO n3~!DrT=$S)}G-|~o')E92͚ !뚠dz Zmr%/I)C6cLY1Ȳ c*S^Uih)q\{!~fD㣨/ D]#RTw{#VJ!4xp>MyIԍ@ qa/B qk`tD$=p \6fJ5 4R1fTXEThs"Pߚ@ RHˈyYB5%N=KГ= CiÝ<5֒ڌ*Y)lNRg+QXV 2E#&j:x6Ta\FnTuS<ȥ)X%KθU͊ V0[uu`6FA`٘fیd u)-ԗm @,_ZQױf0"sàԆ Tęr<+eQ@wK%Rι erqgGHR 6E7,?\iB{Ђ,"b8hQ;QYW 7;sL¸,6DZIvާp#x2=/Uq kMb l% 0Y8sR給c A`.ef\90#Hoc`|\"%G6wD.\Hrf>%{s- 1Yv9W<΋ÙLL ǹpjۙ,,'~1a ҨBT=~q T}et4pl|f.Gby> D\n\#7`4hX3%vwvEB+8Nz=C`{Hk8\>þAճKIvo4jȜ0n0޳gkrQo7fx#nl3i66߈4;Wu:5LNi6= 4;&Ww[*!6_tJ'nm 9qMFԈs׻)T BL'1m,d-rFwBND{۔$% lK@@bY{^Zĭ l=Zu e܍wcjVzV*L %,!/RVzVB?%-JJQY)꺳Bԗmݷ׏J9 RjE0-5G_+>++~-n6b?[Y~˂ asޗGZa/RJJux.p\)o~L찤mI{]=hDk?DbT"Z) g{h9Z&D+2SGbHĄ=2;@ 3}k ?q%}Q+ d\ eҹ,VQ(tia6n|C[Sls,%!HEV9JEeOTfhf'H850raDu;\+zi9H;MF\>31IjI)a[/m8ɮ$„fLtvH`Cn%g% efYeZR̀(TURVtuꈅ;f AԘ!STp4YYk,,Qf9؊ *+QTFi U1jC  %2 0M1kC<U([aeaEnp_Db̋rC Đ֌ϗ%ۗEA ǢԙΗ nC"@f#rM:B іB Seq !ͳyLT(J]e)ĨPNJ@k)K|`EPJ %EcHRi@kf$ZTt*@qJ켏WF"GF>VR'}\)ҭS ђ#<&Yˏ(JF0S0weq@ڎ``2ѨGQR吏o:f>yC{D Pۻ͔Ω?y?ÁywKpDS7oc6nE|ϭgQиP|2~0S 
Jan 21 15:47:26 crc kubenswrapper[4760]: Trace[1225722294]: [13.441452172s] [13.441452172s] END
Jan 21 15:47:26 crc kubenswrapper[4760]: I0121 15:47:26.957063 4760 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 21 15:47:26 crc kubenswrapper[4760]: I0121 15:47:26.957837 4760 trace.go:236] Trace[1623270320]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 15:47:13.751) (total time: 13206ms):
Jan 21 15:47:26 crc kubenswrapper[4760]: Trace[1623270320]: ---"Objects listed" error: 13206ms (15:47:26.957)
Jan 21 15:47:26 crc kubenswrapper[4760]: Trace[1623270320]: [13.206396503s] [13.206396503s] END
Jan 21 15:47:26 crc kubenswrapper[4760]: I0121 15:47:26.957864 4760 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 21 15:47:26 crc kubenswrapper[4760]: I0121 15:47:26.958314 4760 trace.go:236] Trace[867814865]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 15:47:13.227) (total time: 13731ms):
Jan 21 15:47:26 crc kubenswrapper[4760]: Trace[867814865]: ---"Objects listed" error: 13730ms (15:47:26.958)
Jan 21 15:47:26 crc kubenswrapper[4760]: Trace[867814865]: [13.731002572s] [13.731002572s] END
Jan 21 15:47:26 crc kubenswrapper[4760]: I0121 15:47:26.958375 4760 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 21 15:47:26 crc kubenswrapper[4760]: I0121 15:47:26.958581 4760 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Jan 21 15:47:26 crc kubenswrapper[4760]: I0121 15:47:26.958785 4760 trace.go:236] Trace[886517129]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 15:47:13.348) (total time: 13610ms):
Jan 21 15:47:26 crc kubenswrapper[4760]: Trace[886517129]: ---"Objects listed" error: 13610ms (15:47:26.958)
Jan 21 15:47:26 crc kubenswrapper[4760]: Trace[886517129]: [13.610513886s] [13.610513886s] END
Jan 21 15:47:26 crc kubenswrapper[4760]: I0121 15:47:26.958803 4760 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.301693 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.306888 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.565521 4760 apiserver.go:52] "Watching apiserver"
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.567692 4760 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.568041 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"]
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.568427 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.568459 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 21 15:47:27 crc kubenswrapper[4760]: E0121 15:47:27.568529 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.568838 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:47:27 crc kubenswrapper[4760]: E0121 15:47:27.568907 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.568952 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.569142 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.569175 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 21 15:47:27 crc kubenswrapper[4760]: E0121 15:47:27.569202 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.570927 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.570984 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.571002 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.572490 4760 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.572914 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.573037 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.572979 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.573570 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.573573 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.573985 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 21 15:47:27 crc kubenswrapper[4760]: 
I0121 15:47:27.582663 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 05:00:06.380960972 +0000 UTC Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.592690 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.608058 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.618199 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.628786 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.641008 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.656095 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.662045 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.662101 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.662272 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.662306 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.662381 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.662415 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.662437 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.662461 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 
15:47:27.662490 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.662512 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.662537 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.662561 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.662582 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 21 15:47:27 crc kubenswrapper[4760]: E0121 15:47:27.662622 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-21 15:47:28.162588664 +0000 UTC m=+18.830358282 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.662679 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.662720 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.662746 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.662768 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: 
\"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.662793 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.662814 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.662835 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.662856 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.662878 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.662901 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.662922 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.662946 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.662971 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.662991 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663012 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: 
\"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663034 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663057 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663081 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663103 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663124 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663147 4760 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663173 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663195 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663218 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663243 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663310 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663360 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663387 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663412 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663436 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663466 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663489 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663513 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663053 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663066 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663156 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663257 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663269 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663257 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663361 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663473 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663495 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663522 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663665 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663676 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663774 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663805 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663824 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663879 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663965 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663974 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664047 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664164 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664204 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664240 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664292 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664337 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.663534 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664396 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664417 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664440 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664458 4760 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664475 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664482 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664492 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664497 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664563 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664563 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664580 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664618 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664638 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664643 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664664 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664688 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 21 15:47:27 crc 
kubenswrapper[4760]: I0121 15:47:27.664711 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664726 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664732 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664758 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664774 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664790 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664804 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664832 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664859 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 21 
15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664885 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664910 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664922 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664933 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664942 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664953 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664978 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665003 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665023 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665042 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665065 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665094 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665117 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665140 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665160 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665181 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665202 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665226 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665255 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665278 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665301 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.664976 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665028 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665070 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665097 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665223 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665237 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665682 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665267 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665296 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665476 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665491 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665721 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665614 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665721 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665791 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665815 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665838 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665859 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665915 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665939 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665960 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.665986 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666008 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666014 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666029 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666073 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666088 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666099 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666125 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666150 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666176 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666197 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666220 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666223 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666242 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666258 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666264 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666304 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666364 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666385 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666406 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666430 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666501 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666525 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666547 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666573 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666595 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666616 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666639 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666661 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666686 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666709 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666731 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666758 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666783 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666807 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666847 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666867 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666888 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666909 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666931 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666952 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666974 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666996 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667017 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667040 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667064 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667087 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667109 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667131 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667153 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667182 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667206 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667233 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667256 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667276 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667296 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667318 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667359 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667382 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667417 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667441 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667463 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667483 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667504 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667526 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667548 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667569 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667591 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667614 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667637 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 21 15:47:27 crc kubenswrapper[4760]:
I0121 15:47:27.667658 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667683 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667710 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667734 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667756 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667779 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667803 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667831 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667855 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667876 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667896 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 15:47:27 crc 
kubenswrapper[4760]: I0121 15:47:27.667917 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667938 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667959 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667982 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.668004 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.668027 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.668048 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.668070 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.668091 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.668112 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.668134 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 
15:47:27.668154 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.668177 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.668199 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.668221 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.668242 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.668262 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.668285 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.668306 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.668580 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.668608 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.668634 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") 
" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.668659 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.668682 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.668704 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.668725 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.668747 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.668797 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: 
\"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.668828 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.668852 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.668882 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.668910 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.668939 4760 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.668965 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.668992 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669014 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669038 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 15:47:27 
crc kubenswrapper[4760]: I0121 15:47:27.669060 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669095 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669120 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669144 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669222 4760 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" 
Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669237 4760 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669249 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669262 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669275 4760 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669290 4760 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669302 4760 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669314 4760 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669342 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669358 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669371 4760 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669383 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669394 4760 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669408 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669422 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669434 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 
21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669446 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669458 4760 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669471 4760 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669483 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669496 4760 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669510 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669522 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: 
I0121 15:47:27.669535 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669547 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669558 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669571 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669584 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669596 4760 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669611 4760 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669645 4760 reconciler_common.go:293] "Volume 
detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669658 4760 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669672 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669685 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669697 4760 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669710 4760 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669722 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669736 4760 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669748 4760 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669760 4760 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669773 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669786 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669798 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669811 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669823 4760 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669836 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669849 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669861 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669873 4760 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669885 4760 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669896 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.680278 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666274 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666555 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666558 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666757 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.666804 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667219 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667478 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667578 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667686 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667761 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.667920 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669593 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.669918 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.670431 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.670433 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.670475 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.670658 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.670757 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.671000 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.671225 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.671233 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.671687 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.671726 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.671740 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.672011 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.672016 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.672231 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.672445 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.672504 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.672637 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.675448 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.675699 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.675870 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.676036 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.676076 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.676227 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.676423 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.676432 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.676447 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.676662 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.677271 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.677638 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.677866 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.678151 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.678469 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.678668 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.678856 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.678877 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.678978 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.679044 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.679189 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.679353 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.679396 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.679509 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.679777 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.679964 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.680612 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.680877 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.680902 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.681006 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.680973 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.681175 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.681182 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.681307 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.681373 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.681478 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.681670 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.681787 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.682277 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.682528 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.683710 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: E0121 15:47:27.683798 4760 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:47:27 crc kubenswrapper[4760]: E0121 15:47:27.683874 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:47:28.183854111 +0000 UTC m=+18.851623689 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.683964 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.684372 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.684753 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.686558 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.686749 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.687422 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.687839 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.688013 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.688131 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.688212 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.688793 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.691689 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.692031 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.692727 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.694316 4760 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.709952 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.710081 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.710413 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.710631 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.710869 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.710913 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.711131 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.711229 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.711439 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.711460 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.711548 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.711578 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.711891 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.711979 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.712193 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.712457 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.712527 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.712872 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.713369 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.713594 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.714097 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.714280 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.715340 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 15:47:27 crc kubenswrapper[4760]: E0121 15:47:27.715522 4760 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.715534 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: E0121 15:47:27.715651 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:47:28.215621744 +0000 UTC m=+18.883391332 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.715697 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.715809 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.716170 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.717067 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"
cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-
operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.717454 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.722365 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.722474 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.725300 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.731469 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.732119 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.732154 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.732261 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.732451 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: E0121 15:47:27.733436 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:47:27 crc kubenswrapper[4760]: E0121 15:47:27.733473 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:47:27 crc kubenswrapper[4760]: E0121 15:47:27.733496 4760 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:47:27 crc kubenswrapper[4760]: E0121 15:47:27.733587 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 15:47:28.233556472 +0000 UTC m=+18.901326060 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.737368 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: E0121 15:47:27.737931 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:47:27 crc kubenswrapper[4760]: E0121 15:47:27.737954 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:47:27 crc kubenswrapper[4760]: E0121 15:47:27.737970 4760 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:47:27 crc kubenswrapper[4760]: E0121 15:47:27.738019 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" 
failed. No retries permitted until 2026-01-21 15:47:28.237999403 +0000 UTC m=+18.905768981 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.744170 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.744675 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.744761 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.744773 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.744980 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.745049 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.745155 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.746376 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.747729 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.751242 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.751634 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.751639 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). 
InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.751767 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.754932 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.756430 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.756674 4760 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0" exitCode=255 Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.757344 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0"} Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.763048 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.763598 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: E0121 15:47:27.763876 4760 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.764803 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.764924 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.765083 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.765087 4760 scope.go:117] "RemoveContainer" containerID="fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.765147 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.765613 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.766597 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.766824 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.767404 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.770547 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.770595 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.770665 4760 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.770681 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.770694 4760 reconciler_common.go:293] 
"Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.770708 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.770720 4760 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.770732 4760 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.770744 4760 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.770755 4760 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.770767 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.770781 4760 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on 
node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.770794 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.770806 4760 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.770817 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.770827 4760 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.770838 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.770849 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.770861 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.770872 4760 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.770884 4760 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.770897 4760 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.770909 4760 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.770929 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.770941 4760 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.770953 4760 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.770964 4760 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.770976 4760 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.770988 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771000 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771011 4760 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771026 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771038 4760 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771050 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" 
DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771063 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771076 4760 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771088 4760 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771100 4760 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771111 4760 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771121 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771131 4760 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771148 4760 
reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771159 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771171 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771183 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771194 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771205 4760 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771215 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771225 4760 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771235 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771246 4760 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771255 4760 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771264 4760 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771274 4760 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771285 4760 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771296 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771308 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771339 4760 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771352 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771364 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771376 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771387 4760 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771401 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 
crc kubenswrapper[4760]: I0121 15:47:27.771412 4760 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771424 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771436 4760 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771450 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771461 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771474 4760 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771485 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 
15:47:27.771498 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771510 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771521 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771534 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771545 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771557 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771568 4760 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771580 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771590 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771604 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771615 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771628 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771638 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771648 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771659 4760 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771669 4760 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771678 4760 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771687 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771696 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771706 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771717 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771728 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath 
\"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771738 4760 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771749 4760 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771762 4760 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771773 4760 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771783 4760 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771795 4760 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771806 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 
15:47:27.771817 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771829 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771840 4760 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771851 4760 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771863 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771874 4760 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771887 4760 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771898 4760 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771909 4760 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771920 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771932 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771943 4760 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771954 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771965 4760 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771976 4760 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc 
kubenswrapper[4760]: I0121 15:47:27.771987 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.771999 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.772013 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.772025 4760 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.772037 4760 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.772050 4760 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.772061 4760 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.772072 4760 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.772083 4760 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.772094 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.772105 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.772116 4760 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.772127 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.772137 4760 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.772150 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.772162 4760 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.772174 4760 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.772185 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.772197 4760 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.772208 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.772220 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.772233 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node 
\"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.772245 4760 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.772257 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.772270 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.772361 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.772473 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.781302 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.796387 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.798120 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.805103 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.806034 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.811992 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.831290 4760 csr.go:261] certificate signing request csr-r765d is approved, waiting to be issued Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.831566 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.837215 4760 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.837296 4760 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.842509 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.842543 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.842554 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.842575 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.842590 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:27Z","lastTransitionTime":"2026-01-21T15:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.844712 4760 csr.go:257] certificate signing request csr-r765d is issued Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.856669 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.857151 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.872870 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.872910 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.872922 4760 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.872934 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.872946 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.872957 4760 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.881584 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.891512 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.901692 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 15:47:27 crc kubenswrapper[4760]: W0121 15:47:27.912647 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-a172cdfc35e07366121d54b44d9b4db5ed4734bba516fc0ab1280ccea2a4da64 WatchSource:0}: Error finding container a172cdfc35e07366121d54b44d9b4db5ed4734bba516fc0ab1280ccea2a4da64: Status 404 returned error can't find the container with id a172cdfc35e07366121d54b44d9b4db5ed4734bba516fc0ab1280ccea2a4da64 Jan 21 15:47:27 crc kubenswrapper[4760]: E0121 15:47:27.919739 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d0d32b68-30d1-4a80-9669-b44aebde12c8\\\",\\\"systemUUID\\\":\\\"2d155234-f7e3-4a8d-82a9-efdad3b8958b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.928290 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.928351 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.928396 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.928418 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.928430 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:27Z","lastTransitionTime":"2026-01-21T15:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.943593 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e75
20d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:47:27 crc kubenswrapper[4760]: E0121 15:47:27.953743 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d0d32b68-30d1-4a80-9669-b44aebde12c8\\\",\\\"systemUUID\\\":\\\"2d155234-f7e3-4a8d-82a9-efdad3b8958b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.963036 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.963079 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.963092 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.963112 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.963126 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:27Z","lastTransitionTime":"2026-01-21T15:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.963102 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:47:27 crc kubenswrapper[4760]: E0121 15:47:27.977025 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d0d32b68-30d1-4a80-9669-b44aebde12c8\\\",\\\"systemUUID\\\":\\\"2d155234-f7e3-4a8d-82a9-efdad3b8958b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.984059 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.987211 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.987257 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.987270 4760 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.987291 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.987304 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:27Z","lastTransitionTime":"2026-01-21T15:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:27 crc kubenswrapper[4760]: I0121 15:47:27.996846 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:47:27 crc kubenswrapper[4760]: E0121 15:47:27.998046 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d0d32b68-30d1-4a80-9669-b44aebde12c8\\\",\\\"systemUUID\\\":\\\"2d155234-f7e3-4a8d-82a9-efdad3b8958b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.010943 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.011007 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.011021 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.011044 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.011058 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:28Z","lastTransitionTime":"2026-01-21T15:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:28 crc kubenswrapper[4760]: E0121 15:47:28.028477 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d0d32b68-30d1-4a80-9669-b44aebde12c8\\\",\\\"systemUUID\\\":\\\"2d155234-f7e3-4a8d-82a9-efdad3b8958b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:47:28 crc kubenswrapper[4760]: E0121 15:47:28.028654 4760 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.041138 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.041192 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.041206 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.041226 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.041237 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:28Z","lastTransitionTime":"2026-01-21T15:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.144120 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.144177 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.144189 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.144209 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.144219 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:28Z","lastTransitionTime":"2026-01-21T15:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.178438 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:47:28 crc kubenswrapper[4760]: E0121 15:47:28.178605 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-21 15:47:29.178587816 +0000 UTC m=+19.846357394 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.246678 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.246719 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.246732 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.246747 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.246757 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:28Z","lastTransitionTime":"2026-01-21T15:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.279944 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.280015 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.280041 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.280064 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:47:28 crc kubenswrapper[4760]: E0121 15:47:28.280185 4760 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:47:28 crc kubenswrapper[4760]: E0121 15:47:28.280248 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:47:29.280230985 +0000 UTC m=+19.948000563 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:47:28 crc kubenswrapper[4760]: E0121 15:47:28.280715 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:47:28 crc kubenswrapper[4760]: E0121 15:47:28.280739 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:47:28 crc kubenswrapper[4760]: E0121 15:47:28.280752 4760 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:47:28 crc kubenswrapper[4760]: E0121 15:47:28.280777 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-01-21 15:47:29.280770416 +0000 UTC m=+19.948539994 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:47:28 crc kubenswrapper[4760]: E0121 15:47:28.280816 4760 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:47:28 crc kubenswrapper[4760]: E0121 15:47:28.280847 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:47:29.280838697 +0000 UTC m=+19.948608275 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:47:28 crc kubenswrapper[4760]: E0121 15:47:28.280901 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:47:28 crc kubenswrapper[4760]: E0121 15:47:28.280914 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:47:28 crc kubenswrapper[4760]: E0121 15:47:28.280924 4760 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:47:28 crc kubenswrapper[4760]: E0121 15:47:28.280951 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 15:47:29.280943479 +0000 UTC m=+19.948713047 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.349629 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.349667 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.349680 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.349700 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.349714 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:28Z","lastTransitionTime":"2026-01-21T15:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.356004 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-4g84s"] Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.356506 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-4g84s" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.361866 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.362181 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.363431 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-dx99k"] Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.363813 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.367209 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.371717 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.371982 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-5lp9r"] Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.372334 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.372455 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.372511 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.372865 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.376604 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.379658 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.382034 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.382491 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.382608 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.382854 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.386719 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.424806 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.439491 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.452646 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.452684 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.452694 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.452708 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.452717 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:28Z","lastTransitionTime":"2026-01-21T15:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.457309 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPat
h\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed0828
7faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.472461 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.481156 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-multus-socket-dir-parent\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc 
kubenswrapper[4760]: I0121 15:47:28.481209 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5dd365e7-570c-4130-a299-30e376624ce2-rootfs\") pod \"machine-config-daemon-5lp9r\" (UID: \"5dd365e7-570c-4130-a299-30e376624ce2\") " pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.481228 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7300c51f-415f-4696-bda1-a9e79ae5704a-cni-binary-copy\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.481245 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-host-var-lib-cni-multus\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.481263 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-multus-cni-dir\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.481293 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-host-var-lib-cni-bin\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: 
I0121 15:47:28.481309 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7f42p\" (UniqueName: \"kubernetes.io/projected/40eabf28-9fbd-41ef-a858-de7ece013f68-kube-api-access-7f42p\") pod \"node-resolver-4g84s\" (UID: \"40eabf28-9fbd-41ef-a858-de7ece013f68\") " pod="openshift-dns/node-resolver-4g84s" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.481359 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5dd365e7-570c-4130-a299-30e376624ce2-mcd-auth-proxy-config\") pod \"machine-config-daemon-5lp9r\" (UID: \"5dd365e7-570c-4130-a299-30e376624ce2\") " pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.481403 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-host-run-netns\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.481417 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7300c51f-415f-4696-bda1-a9e79ae5704a-multus-daemon-config\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.481432 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-host-run-multus-certs\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 
15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.481450 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-hostroot\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.481464 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-multus-conf-dir\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.481482 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-etc-kubernetes\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.481509 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-host-var-lib-kubelet\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.481528 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6m44\" (UniqueName: \"kubernetes.io/projected/7300c51f-415f-4696-bda1-a9e79ae5704a-kube-api-access-v6m44\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.481555 4760 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-system-cni-dir\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.481570 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-os-release\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.481668 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5dd365e7-570c-4130-a299-30e376624ce2-proxy-tls\") pod \"machine-config-daemon-5lp9r\" (UID: \"5dd365e7-570c-4130-a299-30e376624ce2\") " pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.481742 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxjbt\" (UniqueName: \"kubernetes.io/projected/5dd365e7-570c-4130-a299-30e376624ce2-kube-api-access-kxjbt\") pod \"machine-config-daemon-5lp9r\" (UID: \"5dd365e7-570c-4130-a299-30e376624ce2\") " pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.481773 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-cnibin\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.481794 4760 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-host-run-k8s-cni-cncf-io\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.481816 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/40eabf28-9fbd-41ef-a858-de7ece013f68-hosts-file\") pod \"node-resolver-4g84s\" (UID: \"40eabf28-9fbd-41ef-a858-de7ece013f68\") " pod="openshift-dns/node-resolver-4g84s" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.485274 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.495770 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.509235 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.519007 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.541367 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.555877 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.555915 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.555924 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.555938 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.555950 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:28Z","lastTransitionTime":"2026-01-21T15:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.580084 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.582545 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-hostroot\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.582704 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-hostroot\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.582800 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 22:36:07.198231264 +0000 UTC Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.582906 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-multus-conf-dir\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.583235 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-multus-conf-dir\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.583357 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7300c51f-415f-4696-bda1-a9e79ae5704a-multus-daemon-config\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.583464 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-host-run-multus-certs\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.583572 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-host-run-multus-certs\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.583577 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-etc-kubernetes\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.583629 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-system-cni-dir\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.583655 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-os-release\") pod \"multus-dx99k\" (UID: 
\"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.583678 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-host-var-lib-kubelet\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.583761 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-host-var-lib-kubelet\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.583814 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-os-release\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.583825 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-system-cni-dir\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.583868 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6m44\" (UniqueName: \"kubernetes.io/projected/7300c51f-415f-4696-bda1-a9e79ae5704a-kube-api-access-v6m44\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.584024 4760 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-etc-kubernetes\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.584028 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-cnibin\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.584164 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7300c51f-415f-4696-bda1-a9e79ae5704a-multus-daemon-config\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.584267 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-cnibin\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.584432 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-host-run-k8s-cni-cncf-io\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.584531 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-host-run-k8s-cni-cncf-io\") 
pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.584539 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/40eabf28-9fbd-41ef-a858-de7ece013f68-hosts-file\") pod \"node-resolver-4g84s\" (UID: \"40eabf28-9fbd-41ef-a858-de7ece013f68\") " pod="openshift-dns/node-resolver-4g84s" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.584597 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5dd365e7-570c-4130-a299-30e376624ce2-proxy-tls\") pod \"machine-config-daemon-5lp9r\" (UID: \"5dd365e7-570c-4130-a299-30e376624ce2\") " pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.584629 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxjbt\" (UniqueName: \"kubernetes.io/projected/5dd365e7-570c-4130-a299-30e376624ce2-kube-api-access-kxjbt\") pod \"machine-config-daemon-5lp9r\" (UID: \"5dd365e7-570c-4130-a299-30e376624ce2\") " pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.584675 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5dd365e7-570c-4130-a299-30e376624ce2-rootfs\") pod \"machine-config-daemon-5lp9r\" (UID: \"5dd365e7-570c-4130-a299-30e376624ce2\") " pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.584698 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7300c51f-415f-4696-bda1-a9e79ae5704a-cni-binary-copy\") pod \"multus-dx99k\" (UID: 
\"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.584719 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-multus-socket-dir-parent\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.584745 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-multus-cni-dir\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.584753 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5dd365e7-570c-4130-a299-30e376624ce2-rootfs\") pod \"machine-config-daemon-5lp9r\" (UID: \"5dd365e7-570c-4130-a299-30e376624ce2\") " pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.584766 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-host-var-lib-cni-multus\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.584801 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-host-var-lib-cni-multus\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc 
kubenswrapper[4760]: I0121 15:47:28.584811 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-host-var-lib-cni-bin\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.584832 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7f42p\" (UniqueName: \"kubernetes.io/projected/40eabf28-9fbd-41ef-a858-de7ece013f68-kube-api-access-7f42p\") pod \"node-resolver-4g84s\" (UID: \"40eabf28-9fbd-41ef-a858-de7ece013f68\") " pod="openshift-dns/node-resolver-4g84s" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.584865 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5dd365e7-570c-4130-a299-30e376624ce2-mcd-auth-proxy-config\") pod \"machine-config-daemon-5lp9r\" (UID: \"5dd365e7-570c-4130-a299-30e376624ce2\") " pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.584879 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-host-run-netns\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.584879 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-host-var-lib-cni-bin\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.584951 4760 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-multus-socket-dir-parent\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.585175 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-host-run-netns\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.585229 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7300c51f-415f-4696-bda1-a9e79ae5704a-multus-cni-dir\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.585378 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/40eabf28-9fbd-41ef-a858-de7ece013f68-hosts-file\") pod \"node-resolver-4g84s\" (UID: \"40eabf28-9fbd-41ef-a858-de7ece013f68\") " pod="openshift-dns/node-resolver-4g84s" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.585487 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7300c51f-415f-4696-bda1-a9e79ae5704a-cni-binary-copy\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.585715 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5dd365e7-570c-4130-a299-30e376624ce2-mcd-auth-proxy-config\") pod 
\"machine-config-daemon-5lp9r\" (UID: \"5dd365e7-570c-4130-a299-30e376624ce2\") " pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.595515 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5dd365e7-570c-4130-a299-30e376624ce2-proxy-tls\") pod \"machine-config-daemon-5lp9r\" (UID: \"5dd365e7-570c-4130-a299-30e376624ce2\") " pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.604802 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.615315 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7f42p\" (UniqueName: \"kubernetes.io/projected/40eabf28-9fbd-41ef-a858-de7ece013f68-kube-api-access-7f42p\") pod \"node-resolver-4g84s\" (UID: \"40eabf28-9fbd-41ef-a858-de7ece013f68\") " pod="openshift-dns/node-resolver-4g84s" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.615712 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxjbt\" (UniqueName: \"kubernetes.io/projected/5dd365e7-570c-4130-a299-30e376624ce2-kube-api-access-kxjbt\") pod \"machine-config-daemon-5lp9r\" (UID: \"5dd365e7-570c-4130-a299-30e376624ce2\") " pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.618715 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6m44\" (UniqueName: 
\"kubernetes.io/projected/7300c51f-415f-4696-bda1-a9e79ae5704a-kube-api-access-v6m44\") pod \"multus-dx99k\" (UID: \"7300c51f-415f-4696-bda1-a9e79ae5704a\") " pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.622172 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:47:28 crc kubenswrapper[4760]: E0121 15:47:28.622482 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.633462 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded 
a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.658880 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.659151 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.659242 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.659384 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.659512 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:28Z","lastTransitionTime":"2026-01-21T15:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.662156 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.671249 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-4g84s" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.680384 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-dx99k" Jan 21 15:47:28 crc kubenswrapper[4760]: W0121 15:47:28.683727 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40eabf28_9fbd_41ef_a858_de7ece013f68.slice/crio-121c4114ff2059451ac6f78aa648bb13b292ba9fc5ce7462cc2b9a4ae9d85085 WatchSource:0}: Error finding container 121c4114ff2059451ac6f78aa648bb13b292ba9fc5ce7462cc2b9a4ae9d85085: Status 404 returned error can't find the container with id 121c4114ff2059451ac6f78aa648bb13b292ba9fc5ce7462cc2b9a4ae9d85085 Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.685891 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.686395 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" Jan 21 15:47:28 crc kubenswrapper[4760]: W0121 15:47:28.710038 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5dd365e7_570c_4130_a299_30e376624ce2.slice/crio-5906cacabace39c071848a064805c678c2fdfb009c977b68aeb15486e16ef3ec WatchSource:0}: Error finding container 5906cacabace39c071848a064805c678c2fdfb009c977b68aeb15486e16ef3ec: Status 404 returned error can't find the container with id 5906cacabace39c071848a064805c678c2fdfb009c977b68aeb15486e16ef3ec Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.735718 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.761770 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dx99k" event={"ID":"7300c51f-415f-4696-bda1-a9e79ae5704a","Type":"ContainerStarted","Data":"2ab780788c3a5edb88decc3033136803a37216432dd6f9627cc073c4438f9a25"} Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.761892 4760 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.761913 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.761922 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.761936 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.761947 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:28Z","lastTransitionTime":"2026-01-21T15:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.769940 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"83016e41cddb8705205604c2f1f0c38956f2183dd058dc225c6cd56ebccace57"} Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.773396 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.777951 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6"} Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.778843 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.790735 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.803110 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" 
event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerStarted","Data":"5906cacabace39c071848a064805c678c2fdfb009c977b68aeb15486e16ef3ec"} Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.814044 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-4g84s" event={"ID":"40eabf28-9fbd-41ef-a858-de7ece013f68","Type":"ContainerStarted","Data":"121c4114ff2059451ac6f78aa648bb13b292ba9fc5ce7462cc2b9a4ae9d85085"} Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.827552 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e"} Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.827614 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e"} Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.827628 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"4126b7da1fcb60ea0d84a296c45ccd978230872807bc34c651b534f6a2becd71"} Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.831404 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-lkblz"] Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.834610 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-gfprm"] Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.834943 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-lkblz" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.835963 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.853904 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.854415 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.854621 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.854883 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.855101 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.855277 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.855575 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.855746 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.855979 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.856429 4760 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-21 15:42:27 +0000 UTC, rotation deadline is 2026-12-05 04:40:11.504399734 +0000 UTC Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.856542 4760 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7620h52m42.647861568s for next certificate rotation Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.857073 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46"} Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.857170 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"a172cdfc35e07366121d54b44d9b4db5ed4734bba516fc0ab1280ccea2a4da64"} Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.857727 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.875837 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.875886 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.875896 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.875914 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.875928 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:28Z","lastTransitionTime":"2026-01-21T15:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.909654 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e75
20d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.936130 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.937797 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.955275 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.955258 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.959895 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.976516 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.983139 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.983192 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.983201 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.983217 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.983229 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:28Z","lastTransitionTime":"2026-01-21T15:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.993001 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-ovnkube-config\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.993042 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-ovn-node-metrics-cert\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.993067 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/bd3c6c18-f174-4022-96c5-5892413c76fd-tuning-conf-dir\") pod 
\"multus-additional-cni-plugins-lkblz\" (UID: \"bd3c6c18-f174-4022-96c5-5892413c76fd\") " pod="openshift-multus/multus-additional-cni-plugins-lkblz" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.993087 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-etc-openvswitch\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.993107 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-run-openvswitch\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.993159 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-run-ovn\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.993175 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-node-log\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.993193 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-run-ovn-kubernetes\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.993244 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-ovnkube-script-lib\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.993282 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-kubelet\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.993308 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.993349 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/bd3c6c18-f174-4022-96c5-5892413c76fd-os-release\") pod \"multus-additional-cni-plugins-lkblz\" (UID: \"bd3c6c18-f174-4022-96c5-5892413c76fd\") " pod="openshift-multus/multus-additional-cni-plugins-lkblz" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.993369 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-slash\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.993412 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/bd3c6c18-f174-4022-96c5-5892413c76fd-system-cni-dir\") pod \"multus-additional-cni-plugins-lkblz\" (UID: \"bd3c6c18-f174-4022-96c5-5892413c76fd\") " pod="openshift-multus/multus-additional-cni-plugins-lkblz" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.993445 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5667k\" (UniqueName: \"kubernetes.io/projected/bd3c6c18-f174-4022-96c5-5892413c76fd-kube-api-access-5667k\") pod \"multus-additional-cni-plugins-lkblz\" (UID: \"bd3c6c18-f174-4022-96c5-5892413c76fd\") " pod="openshift-multus/multus-additional-cni-plugins-lkblz" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.993517 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-cni-bin\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.993585 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/bd3c6c18-f174-4022-96c5-5892413c76fd-cni-binary-copy\") pod \"multus-additional-cni-plugins-lkblz\" (UID: \"bd3c6c18-f174-4022-96c5-5892413c76fd\") " 
pod="openshift-multus/multus-additional-cni-plugins-lkblz" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.993604 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kv9h\" (UniqueName: \"kubernetes.io/projected/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-kube-api-access-7kv9h\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.993640 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-systemd-units\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.993688 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-cni-netd\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.993724 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-log-socket\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.993739 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-env-overrides\") pod \"ovnkube-node-gfprm\" (UID: 
\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.993767 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/bd3c6c18-f174-4022-96c5-5892413c76fd-cnibin\") pod \"multus-additional-cni-plugins-lkblz\" (UID: \"bd3c6c18-f174-4022-96c5-5892413c76fd\") " pod="openshift-multus/multus-additional-cni-plugins-lkblz" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.993806 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-var-lib-openvswitch\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.993829 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-run-systemd\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.993850 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/bd3c6c18-f174-4022-96c5-5892413c76fd-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-lkblz\" (UID: \"bd3c6c18-f174-4022-96c5-5892413c76fd\") " pod="openshift-multus/multus-additional-cni-plugins-lkblz" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.993867 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-run-netns\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:28 crc kubenswrapper[4760]: I0121 15:47:28.992940 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.010399 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.031457 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.060334 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.080262 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.085433 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.085603 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.085698 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.085823 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.085898 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:29Z","lastTransitionTime":"2026-01-21T15:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.095159 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-cni-netd\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.095213 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-log-socket\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.095238 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-env-overrides\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.095260 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/bd3c6c18-f174-4022-96c5-5892413c76fd-cnibin\") pod \"multus-additional-cni-plugins-lkblz\" (UID: \"bd3c6c18-f174-4022-96c5-5892413c76fd\") " pod="openshift-multus/multus-additional-cni-plugins-lkblz" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.095282 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-var-lib-openvswitch\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc 
kubenswrapper[4760]: I0121 15:47:29.095302 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-run-systemd\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.095342 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/bd3c6c18-f174-4022-96c5-5892413c76fd-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-lkblz\" (UID: \"bd3c6c18-f174-4022-96c5-5892413c76fd\") " pod="openshift-multus/multus-additional-cni-plugins-lkblz" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.095362 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-run-netns\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.095391 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-ovnkube-config\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.095411 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-ovn-node-metrics-cert\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 
15:47:29.095432 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/bd3c6c18-f174-4022-96c5-5892413c76fd-tuning-conf-dir\") pod \"multus-additional-cni-plugins-lkblz\" (UID: \"bd3c6c18-f174-4022-96c5-5892413c76fd\") " pod="openshift-multus/multus-additional-cni-plugins-lkblz" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.095451 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-etc-openvswitch\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.095473 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-run-openvswitch\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.095487 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-run-ovn\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.095503 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-node-log\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.095517 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-run-ovn-kubernetes\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.095531 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-ovnkube-script-lib\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.095549 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-kubelet\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.095566 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.095589 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/bd3c6c18-f174-4022-96c5-5892413c76fd-os-release\") pod \"multus-additional-cni-plugins-lkblz\" (UID: \"bd3c6c18-f174-4022-96c5-5892413c76fd\") " pod="openshift-multus/multus-additional-cni-plugins-lkblz" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.095613 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-slash\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.095644 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/bd3c6c18-f174-4022-96c5-5892413c76fd-system-cni-dir\") pod \"multus-additional-cni-plugins-lkblz\" (UID: \"bd3c6c18-f174-4022-96c5-5892413c76fd\") " pod="openshift-multus/multus-additional-cni-plugins-lkblz" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.095676 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5667k\" (UniqueName: \"kubernetes.io/projected/bd3c6c18-f174-4022-96c5-5892413c76fd-kube-api-access-5667k\") pod \"multus-additional-cni-plugins-lkblz\" (UID: \"bd3c6c18-f174-4022-96c5-5892413c76fd\") " pod="openshift-multus/multus-additional-cni-plugins-lkblz" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.095737 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-cni-bin\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.095763 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/bd3c6c18-f174-4022-96c5-5892413c76fd-cni-binary-copy\") pod \"multus-additional-cni-plugins-lkblz\" (UID: \"bd3c6c18-f174-4022-96c5-5892413c76fd\") " pod="openshift-multus/multus-additional-cni-plugins-lkblz" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.095787 
4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kv9h\" (UniqueName: \"kubernetes.io/projected/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-kube-api-access-7kv9h\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.095810 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-systemd-units\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.095882 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-systemd-units\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.095926 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-cni-netd\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.095946 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-log-socket\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.096409 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-slash\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.096419 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-cni-bin\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.096431 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-kubelet\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.096443 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/bd3c6c18-f174-4022-96c5-5892413c76fd-system-cni-dir\") pod \"multus-additional-cni-plugins-lkblz\" (UID: \"bd3c6c18-f174-4022-96c5-5892413c76fd\") " pod="openshift-multus/multus-additional-cni-plugins-lkblz" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.096462 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-run-ovn-kubernetes\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.096520 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.096613 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/bd3c6c18-f174-4022-96c5-5892413c76fd-os-release\") pod \"multus-additional-cni-plugins-lkblz\" (UID: \"bd3c6c18-f174-4022-96c5-5892413c76fd\") " pod="openshift-multus/multus-additional-cni-plugins-lkblz" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.096729 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-run-systemd\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.096759 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/bd3c6c18-f174-4022-96c5-5892413c76fd-cnibin\") pod \"multus-additional-cni-plugins-lkblz\" (UID: \"bd3c6c18-f174-4022-96c5-5892413c76fd\") " pod="openshift-multus/multus-additional-cni-plugins-lkblz" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.096781 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-var-lib-openvswitch\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.096800 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-etc-openvswitch\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.096421 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-node-log\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.096964 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-ovnkube-script-lib\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.096985 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-run-openvswitch\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.097130 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-ovnkube-config\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.097307 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-run-ovn\") pod \"ovnkube-node-gfprm\" (UID: 
\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.097547 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/bd3c6c18-f174-4022-96c5-5892413c76fd-cni-binary-copy\") pod \"multus-additional-cni-plugins-lkblz\" (UID: \"bd3c6c18-f174-4022-96c5-5892413c76fd\") " pod="openshift-multus/multus-additional-cni-plugins-lkblz" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.097519 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/bd3c6c18-f174-4022-96c5-5892413c76fd-tuning-conf-dir\") pod \"multus-additional-cni-plugins-lkblz\" (UID: \"bd3c6c18-f174-4022-96c5-5892413c76fd\") " pod="openshift-multus/multus-additional-cni-plugins-lkblz" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.097490 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-run-netns\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.097666 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/bd3c6c18-f174-4022-96c5-5892413c76fd-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-lkblz\" (UID: \"bd3c6c18-f174-4022-96c5-5892413c76fd\") " pod="openshift-multus/multus-additional-cni-plugins-lkblz" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.097786 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-env-overrides\") pod \"ovnkube-node-gfprm\" (UID: 
\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.099394 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.101561 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-ovn-node-metrics-cert\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.116095 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5667k\" (UniqueName: \"kubernetes.io/projected/bd3c6c18-f174-4022-96c5-5892413c76fd-kube-api-access-5667k\") pod \"multus-additional-cni-plugins-lkblz\" (UID: \"bd3c6c18-f174-4022-96c5-5892413c76fd\") " 
pod="openshift-multus/multus-additional-cni-plugins-lkblz" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.118742 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kv9h\" (UniqueName: \"kubernetes.io/projected/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-kube-api-access-7kv9h\") pod \"ovnkube-node-gfprm\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") " pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.123254 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\
\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.136503 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.148316 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.162912 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.176540 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.189010 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.189067 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.189078 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.189099 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.189113 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:29Z","lastTransitionTime":"2026-01-21T15:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.193798 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.196892 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:47:29 crc kubenswrapper[4760]: E0121 15:47:29.197202 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:47:31.197170005 +0000 UTC m=+21.864939593 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.206058 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-lkblz" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.217788 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: W0121 15:47:29.221403 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd3c6c18_f174_4022_96c5_5892413c76fd.slice/crio-5e101a32147a292dc075779500fef49eb1415e24f452da83287627f6eb7e9b44 WatchSource:0}: Error finding container 5e101a32147a292dc075779500fef49eb1415e24f452da83287627f6eb7e9b44: Status 404 returned error can't find the container with id 5e101a32147a292dc075779500fef49eb1415e24f452da83287627f6eb7e9b44 Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.232609 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.247311 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.278699 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.291243 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.291285 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.291293 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 
15:47:29.291309 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.291356 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:29Z","lastTransitionTime":"2026-01-21T15:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.298090 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.298159 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.298192 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 
15:47:29.298211 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:47:29 crc kubenswrapper[4760]: E0121 15:47:29.298283 4760 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:47:29 crc kubenswrapper[4760]: E0121 15:47:29.298291 4760 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:47:29 crc kubenswrapper[4760]: E0121 15:47:29.298386 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:47:31.298344814 +0000 UTC m=+21.966114402 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:47:29 crc kubenswrapper[4760]: E0121 15:47:29.298406 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:47:31.298397855 +0000 UTC m=+21.966167433 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:47:29 crc kubenswrapper[4760]: E0121 15:47:29.298480 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:47:29 crc kubenswrapper[4760]: E0121 15:47:29.298530 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:47:29 crc kubenswrapper[4760]: E0121 15:47:29.298544 4760 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:47:29 crc kubenswrapper[4760]: E0121 15:47:29.298551 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:47:29 crc kubenswrapper[4760]: E0121 15:47:29.298608 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:47:29 crc kubenswrapper[4760]: E0121 15:47:29.298617 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-01-21 15:47:31.298589689 +0000 UTC m=+21.966359267 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:47:29 crc kubenswrapper[4760]: E0121 15:47:29.298634 4760 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:47:29 crc kubenswrapper[4760]: E0121 15:47:29.298724 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 15:47:31.298699441 +0000 UTC m=+21.966469189 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.301216 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.325029 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.351474 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.381179 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\
":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2b
c1ef757d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminate
d\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.396671 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.396703 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.396713 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.396730 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.396748 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:29Z","lastTransitionTime":"2026-01-21T15:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.427741 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.456796 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.475289 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.492053 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.499885 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.499933 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.499942 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.499957 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.499973 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:29Z","lastTransitionTime":"2026-01-21T15:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.503039 4760 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.503442 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/pods/multus-dx99k/status\": read tcp 38.129.56.65:55356->38.129.56.65:6443: use of closed network connection" Jan 21 15:47:29 crc kubenswrapper[4760]: W0121 15:47:29.503900 4760 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": watch of *v1.Secret ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": Unexpected watch close - watch lasted less than a second and no items received Jan 21 15:47:29 crc kubenswrapper[4760]: W0121 15:47:29.503922 4760 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovnkube-config": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovnkube-config": Unexpected watch close - watch lasted less than a second and no items received Jan 21 15:47:29 crc kubenswrapper[4760]: W0121 15:47:29.503964 4760 reflector.go:484] object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": watch of *v1.Secret ended with: very short watch: object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": Unexpected watch close - watch lasted less than a second and no items received Jan 21 15:47:29 crc kubenswrapper[4760]: W0121 15:47:29.503990 4760 reflector.go:484] object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 21 15:47:29 crc kubenswrapper[4760]: 
W0121 15:47:29.503998 4760 reflector.go:484] object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 21 15:47:29 crc kubenswrapper[4760]: W0121 15:47:29.504021 4760 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": Unexpected watch close - watch lasted less than a second and no items received Jan 21 15:47:29 crc kubenswrapper[4760]: W0121 15:47:29.504106 4760 reflector.go:484] object-"openshift-multus"/"default-cni-sysctl-allowlist": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"default-cni-sysctl-allowlist": Unexpected watch close - watch lasted less than a second and no items received Jan 21 15:47:29 crc kubenswrapper[4760]: W0121 15:47:29.504150 4760 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": watch of *v1.Secret ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": Unexpected watch close - watch lasted less than a second and no items received Jan 21 15:47:29 crc kubenswrapper[4760]: W0121 15:47:29.504356 4760 reflector.go:484] object-"openshift-ovn-kubernetes"/"env-overrides": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"env-overrides": Unexpected watch close - watch lasted less than a second and no items received Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.527589 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.583143 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 22:50:56.682313515 +0000 UTC Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.602225 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.602254 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.602261 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.602277 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.602286 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:29Z","lastTransitionTime":"2026-01-21T15:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.624533 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:47:29 crc kubenswrapper[4760]: E0121 15:47:29.624657 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.624831 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:47:29 crc kubenswrapper[4760]: E0121 15:47:29.624996 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.626845 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.627534 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.628597 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.629215 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.630316 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.631001 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.631800 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.633058 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.633913 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.635177 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.635858 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.637288 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.638158 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.638865 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.640204 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.640885 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.642286 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.642848 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.643592 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.644971 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.645604 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.646904 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.647484 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.648815 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.649444 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.650271 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.651006 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.653236 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.653953 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.654840 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 
15:47:29.655496 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.656058 4760 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.656185 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.659036 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.660368 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.661116 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.663024 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.664228 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.664913 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.665805 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.667017 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.667898 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.669306 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.670413 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.671915 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.672927 4760 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.677024 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.681715 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.682561 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.683860 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.684488 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.684987 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.685628 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.686242 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.686855 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.687909 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.688459 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.698400 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.704196 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.704295 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.704423 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.704489 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.704571 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:29Z","lastTransitionTime":"2026-01-21T15:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.719961 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.733261 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.752043 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.776486 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef757d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.789720 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.802375 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.808191 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.808249 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.808260 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.808283 4760 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.808294 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:29Z","lastTransitionTime":"2026-01-21T15:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.819303 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\
\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.836036 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.854502 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.859254 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" event={"ID":"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49","Type":"ContainerStarted","Data":"b5fa27d025848e094ee9fbae80d0d1dc50a2e3a8dd42089183368ae4f1396adf"} Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.861458 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dx99k" event={"ID":"7300c51f-415f-4696-bda1-a9e79ae5704a","Type":"ContainerStarted","Data":"55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6"} Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.862594 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-4g84s" event={"ID":"40eabf28-9fbd-41ef-a858-de7ece013f68","Type":"ContainerStarted","Data":"826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244"} Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.863811 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" 
event={"ID":"bd3c6c18-f174-4022-96c5-5892413c76fd","Type":"ContainerStarted","Data":"5e101a32147a292dc075779500fef49eb1415e24f452da83287627f6eb7e9b44"} Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.866191 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerStarted","Data":"3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098"} Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.866226 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerStarted","Data":"c4b0860c42e9db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99"} Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.873184 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.892837 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controll
er\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.909123 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.913920 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.913967 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.913976 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.913992 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.914001 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:29Z","lastTransitionTime":"2026-01-21T15:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.935702 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:29 crc kubenswrapper[4760]: I0121 15:47:29.976682 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.010385 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.021758 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.021803 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.021835 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.021871 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.021882 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:30Z","lastTransitionTime":"2026-01-21T15:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.065450 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.097774 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.124594 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.124640 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.124652 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.124671 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.124683 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:30Z","lastTransitionTime":"2026-01-21T15:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.126121 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.149738 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.173889 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef757d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.189132 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.206009 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.218674 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.227366 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.227400 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.227409 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.227424 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.227434 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:30Z","lastTransitionTime":"2026-01-21T15:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.239676 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.317522 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.330184 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.330219 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.330228 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.330243 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.330253 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:30Z","lastTransitionTime":"2026-01-21T15:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.345826 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.359536 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.366568 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.421581 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.433107 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.433147 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.433158 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.433176 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.433189 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:30Z","lastTransitionTime":"2026-01-21T15:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.456535 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-nqxc7"] Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.456999 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-nqxc7" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.459000 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.459137 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.459241 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.459433 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.469884 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.482985 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.492995 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.509177 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.535743 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.535795 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.535811 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.535828 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.535841 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:30Z","lastTransitionTime":"2026-01-21T15:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.544892 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\
",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef757d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb1
8ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.574704 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.583858 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 03:06:00.780425586 +0000 UTC Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.607897 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" 
Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.612923 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-476wz\" (UniqueName: \"kubernetes.io/projected/4ad0b627-e961-4ca1-9d20-35844f88fac1-kube-api-access-476wz\") pod \"node-ca-nqxc7\" (UID: \"4ad0b627-e961-4ca1-9d20-35844f88fac1\") " pod="openshift-image-registry/node-ca-nqxc7" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.613034 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4ad0b627-e961-4ca1-9d20-35844f88fac1-host\") pod \"node-ca-nqxc7\" (UID: \"4ad0b627-e961-4ca1-9d20-35844f88fac1\") " pod="openshift-image-registry/node-ca-nqxc7" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.613116 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4ad0b627-e961-4ca1-9d20-35844f88fac1-serviceca\") pod \"node-ca-nqxc7\" (UID: \"4ad0b627-e961-4ca1-9d20-35844f88fac1\") " pod="openshift-image-registry/node-ca-nqxc7" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.622063 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:47:30 crc kubenswrapper[4760]: E0121 15:47:30.622251 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.634446 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets
/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.638567 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.638594 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.638603 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.638619 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.638631 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:30Z","lastTransitionTime":"2026-01-21T15:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.677464 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad0b627-e961-4ca1-9d20-35844f88fac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-476wz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.714366 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4ad0b627-e961-4ca1-9d20-35844f88fac1-host\") pod \"node-ca-nqxc7\" (UID: \"4ad0b627-e961-4ca1-9d20-35844f88fac1\") " pod="openshift-image-registry/node-ca-nqxc7" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.714408 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/4ad0b627-e961-4ca1-9d20-35844f88fac1-serviceca\") pod \"node-ca-nqxc7\" (UID: \"4ad0b627-e961-4ca1-9d20-35844f88fac1\") " pod="openshift-image-registry/node-ca-nqxc7" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.714470 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-476wz\" (UniqueName: \"kubernetes.io/projected/4ad0b627-e961-4ca1-9d20-35844f88fac1-kube-api-access-476wz\") pod \"node-ca-nqxc7\" (UID: \"4ad0b627-e961-4ca1-9d20-35844f88fac1\") " pod="openshift-image-registry/node-ca-nqxc7" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.714523 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4ad0b627-e961-4ca1-9d20-35844f88fac1-host\") pod \"node-ca-nqxc7\" (UID: \"4ad0b627-e961-4ca1-9d20-35844f88fac1\") " pod="openshift-image-registry/node-ca-nqxc7" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.715497 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4ad0b627-e961-4ca1-9d20-35844f88fac1-serviceca\") pod \"node-ca-nqxc7\" (UID: \"4ad0b627-e961-4ca1-9d20-35844f88fac1\") " pod="openshift-image-registry/node-ca-nqxc7" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.720317 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.727621 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.740841 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.740870 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.740880 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.740894 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.740904 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:30Z","lastTransitionTime":"2026-01-21T15:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.765871 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-476wz\" (UniqueName: \"kubernetes.io/projected/4ad0b627-e961-4ca1-9d20-35844f88fac1-kube-api-access-476wz\") pod \"node-ca-nqxc7\" (UID: \"4ad0b627-e961-4ca1-9d20-35844f88fac1\") " pod="openshift-image-registry/node-ca-nqxc7" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.768232 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-nqxc7" Jan 21 15:47:30 crc kubenswrapper[4760]: W0121 15:47:30.780717 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ad0b627_e961_4ca1_9d20_35844f88fac1.slice/crio-ebcd046aaab88daeaf5c2301bb06512027283af4a60d221934e737d27b607c66 WatchSource:0}: Error finding container ebcd046aaab88daeaf5c2301bb06512027283af4a60d221934e737d27b607c66: Status 404 returned error can't find the container with id ebcd046aaab88daeaf5c2301bb06512027283af4a60d221934e737d27b607c66 Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.797887 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.835871 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.844143 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:30 crc 
kubenswrapper[4760]: I0121 15:47:30.844192 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.844201 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.844229 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.844242 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:30Z","lastTransitionTime":"2026-01-21T15:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.872129 4760 generic.go:334] "Generic (PLEG): container finished" podID="bd3c6c18-f174-4022-96c5-5892413c76fd" containerID="4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f" exitCode=0 Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.872253 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" event={"ID":"bd3c6c18-f174-4022-96c5-5892413c76fd","Type":"ContainerDied","Data":"4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f"} Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.873841 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-nqxc7" event={"ID":"4ad0b627-e961-4ca1-9d20-35844f88fac1","Type":"ContainerStarted","Data":"ebcd046aaab88daeaf5c2301bb06512027283af4a60d221934e737d27b607c66"} Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.874897 4760 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"rec
ursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.882585 4760 generic.go:334] "Generic (PLEG): container finished" podID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerID="6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a" exitCode=0 Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.883375 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" 
event={"ID":"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49","Type":"ContainerDied","Data":"6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a"} Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.888076 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"4a7fff61ff004ec8235bfddb3a9e0c3a82ac17591b1841d10b366f02833c2fa6"} Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.918515 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.947009 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.947053 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.947063 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.947082 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.947093 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:30Z","lastTransitionTime":"2026-01-21T15:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.962216 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:30 crc kubenswrapper[4760]: I0121 15:47:30.996026 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.035758 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a7fff61ff004ec8235bfddb3a9e0c3a82ac17591b1841d10b366f02833c2fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.048635 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.048698 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.048710 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.048732 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.048744 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:31Z","lastTransitionTime":"2026-01-21T15:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.066747 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.102563 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.107725 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.151274 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.151314 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.151336 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.151356 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.151368 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:31Z","lastTransitionTime":"2026-01-21T15:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.163723 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"c
ert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef757d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":
\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.205696 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.219307 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:47:31 crc kubenswrapper[4760]: E0121 15:47:31.219593 4760 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:47:35.21955545 +0000 UTC m=+25.887325028 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.237729 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95
b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.253437 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.253720 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.253735 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.253752 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.253764 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:31Z","lastTransitionTime":"2026-01-21T15:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.274738 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad0b627-e961-4ca1-9d20-35844f88fac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-476wz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.318263 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.320581 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.320619 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.320643 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.320673 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:47:31 crc kubenswrapper[4760]: E0121 15:47:31.320804 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:47:31 crc kubenswrapper[4760]: E0121 15:47:31.320811 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:47:31 crc kubenswrapper[4760]: E0121 15:47:31.320849 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:47:31 crc kubenswrapper[4760]: E0121 15:47:31.320860 4760 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:47:31 crc kubenswrapper[4760]: E0121 15:47:31.320865 4760 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:47:31 crc kubenswrapper[4760]: E0121 15:47:31.320882 4760 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:47:31 crc kubenswrapper[4760]: E0121 15:47:31.320823 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:47:31 crc kubenswrapper[4760]: E0121 15:47:31.321021 4760 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:47:31 crc kubenswrapper[4760]: E0121 15:47:31.320907 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:47:35.320893935 +0000 UTC m=+25.988663513 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:47:31 crc kubenswrapper[4760]: E0121 15:47:31.321084 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-01-21 15:47:35.321061169 +0000 UTC m=+25.988830817 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:47:31 crc kubenswrapper[4760]: E0121 15:47:31.321105 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:47:35.32109689 +0000 UTC m=+25.988866568 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:47:31 crc kubenswrapper[4760]: E0121 15:47:31.321120 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 15:47:35.32111163 +0000 UTC m=+25.988881308 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.356182 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.356218 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.356227 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.356242 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.356251 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:31Z","lastTransitionTime":"2026-01-21T15:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.362291 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.396048 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.433782 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.458824 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.458856 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.458864 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.458877 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.458886 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:31Z","lastTransitionTime":"2026-01-21T15:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.475149 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:31Z 
is after 2025-08-24T17:21:41Z" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.514474 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.557703 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.561565 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.561711 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.561840 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.561985 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.562094 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:31Z","lastTransitionTime":"2026-01-21T15:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.584202 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 18:42:43.307415565 +0000 UTC Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.596914 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.621648 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.621648 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:47:31 crc kubenswrapper[4760]: E0121 15:47:31.621859 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:47:31 crc kubenswrapper[4760]: E0121 15:47:31.621882 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.636213 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.665075 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.665125 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:31 crc 
kubenswrapper[4760]: I0121 15:47:31.665137 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.665167 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.665180 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:31Z","lastTransitionTime":"2026-01-21T15:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.767500 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.767536 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.767546 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.767562 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.767573 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:31Z","lastTransitionTime":"2026-01-21T15:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.869945 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.869991 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.870000 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.870016 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.870026 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:31Z","lastTransitionTime":"2026-01-21T15:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.891765 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-nqxc7" event={"ID":"4ad0b627-e961-4ca1-9d20-35844f88fac1","Type":"ContainerStarted","Data":"33201b5ec0e28f2916354c9c63b13c3cdcb8776cba4df19d81098b28233d5c2d"} Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.895308 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" event={"ID":"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49","Type":"ContainerStarted","Data":"cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03"} Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.895500 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" event={"ID":"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49","Type":"ContainerStarted","Data":"6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287"} Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.895588 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" event={"ID":"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49","Type":"ContainerStarted","Data":"7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a"} Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.895661 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" event={"ID":"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49","Type":"ContainerStarted","Data":"93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6"} Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.895720 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" event={"ID":"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49","Type":"ContainerStarted","Data":"5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91"} Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 
15:47:31.895788 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" event={"ID":"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49","Type":"ContainerStarted","Data":"d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1"} Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.897721 4760 generic.go:334] "Generic (PLEG): container finished" podID="bd3c6c18-f174-4022-96c5-5892413c76fd" containerID="f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503" exitCode=0 Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.897820 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" event={"ID":"bd3c6c18-f174-4022-96c5-5892413c76fd","Type":"ContainerDied","Data":"f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503"} Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.909154 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.950902 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9
db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.965022 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.973035 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.973087 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.973096 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.973110 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.973120 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:31Z","lastTransitionTime":"2026-01-21T15:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.977138 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:31 crc kubenswrapper[4760]: I0121 15:47:31.992726 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:31Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.004599 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a7fff61ff004ec8235bfddb3a9e0c3a82ac17591b1841d10b366f02833c2fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T15:47:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.024265 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.043580 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef757d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.057091 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.071959 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.076606 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.076646 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.076657 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.076683 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.076697 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:32Z","lastTransitionTime":"2026-01-21T15:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.084619 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad0b627-e961-4ca1-9d20-35844f88fac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33201b5ec0e28f2916354c9c63b13c3cdcb8776cba4df19d81098b28233d5c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-476wz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.115456 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-sy
ncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.159392 4760 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa
41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.179536 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.179575 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.179587 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.179606 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.179620 4760 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:32Z","lastTransitionTime":"2026-01-21T15:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.196252 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.235930 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.280876 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.282835 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.282870 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.282880 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.282896 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.282905 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:32Z","lastTransitionTime":"2026-01-21T15:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.316194 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.361013 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-21T15:47:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.385292 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.385350 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.385359 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.385378 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.385389 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:32Z","lastTransitionTime":"2026-01-21T15:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.403446 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef757d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.439226 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.477790 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.487884 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.487908 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.487921 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.487937 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.487948 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:32Z","lastTransitionTime":"2026-01-21T15:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.516613 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a7fff61ff004ec8235bfddb3a9e0c3a82ac17591b1841d10b366f02833c2fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.561203 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.586208 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 14:14:21.367129203 +0000 UTC Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.590611 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.590662 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.590686 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.590710 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.590727 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:32Z","lastTransitionTime":"2026-01-21T15:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.596807 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"i
mageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.621893 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:47:32 crc kubenswrapper[4760]: E0121 15:47:32.621999 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.637999 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.675145 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.693281 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.693354 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.693365 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.693382 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.693393 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:32Z","lastTransitionTime":"2026-01-21T15:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.715578 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.756456 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqxc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad0b627-e961-4ca1-9d20-35844f88fac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33201b5ec0e28f2916354c9c63b13c3cdcb8776cba4df19d81098b28233d5c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-476wz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.795414 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.795460 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.795475 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.795500 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.795511 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:32Z","lastTransitionTime":"2026-01-21T15:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.797034 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:32Z 
is after 2025-08-24T17:21:41Z" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.835895 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.897795 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.897836 4760 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.897844 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.897858 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.897874 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:32Z","lastTransitionTime":"2026-01-21T15:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.902121 4760 generic.go:334] "Generic (PLEG): container finished" podID="bd3c6c18-f174-4022-96c5-5892413c76fd" containerID="a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705" exitCode=0 Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.902684 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" event={"ID":"bd3c6c18-f174-4022-96c5-5892413c76fd","Type":"ContainerDied","Data":"a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705"} Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.915670 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.934543 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.957130 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.998034 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:32Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.999597 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.999627 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.999637 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.999650 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:32 crc kubenswrapper[4760]: I0121 15:47:32.999658 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:32Z","lastTransitionTime":"2026-01-21T15:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.034537 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad0b627-e961-4ca1-9d20-35844f88fac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33201b5ec0e28f2916354c9c63b13c3cdcb8776cba4df19d81098b28233d5c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-476wz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:33Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.079351 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324a
a5cf23340d953f9a6eba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernet
es.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:33Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.102570 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.102613 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.102622 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.102667 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.102680 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:33Z","lastTransitionTime":"2026-01-21T15:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.119388 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:33Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.160364 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:33Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.196837 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:33Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.205778 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.205827 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.205843 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.205868 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.205887 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:33Z","lastTransitionTime":"2026-01-21T15:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.240882 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:33Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.283706 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:33Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.307691 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.307727 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.307737 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.307751 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.307760 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:33Z","lastTransitionTime":"2026-01-21T15:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.323877 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef757d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:33Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.359847 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:33Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.397130 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:33Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.410976 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.411044 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.411061 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.411091 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.411109 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:33Z","lastTransitionTime":"2026-01-21T15:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.437234 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a7fff61ff004ec8235bfddb3a9e0c3a82ac17591b1841d10b366f02833c2fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:33Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.514361 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.514404 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.514416 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.514437 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.514452 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:33Z","lastTransitionTime":"2026-01-21T15:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.587404 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 00:35:00.155593109 +0000 UTC Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.617696 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.617754 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.617765 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.617786 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.617797 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:33Z","lastTransitionTime":"2026-01-21T15:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.622202 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:47:33 crc kubenswrapper[4760]: E0121 15:47:33.622374 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.622450 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:47:33 crc kubenswrapper[4760]: E0121 15:47:33.622615 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.720566 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.720605 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.720617 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.720639 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.720650 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:33Z","lastTransitionTime":"2026-01-21T15:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.824384 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.824465 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.824482 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.824508 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.824521 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:33Z","lastTransitionTime":"2026-01-21T15:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.909057 4760 generic.go:334] "Generic (PLEG): container finished" podID="bd3c6c18-f174-4022-96c5-5892413c76fd" containerID="bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27" exitCode=0 Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.909134 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" event={"ID":"bd3c6c18-f174-4022-96c5-5892413c76fd","Type":"ContainerDied","Data":"bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27"} Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.914111 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" event={"ID":"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49","Type":"ContainerStarted","Data":"521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e"} Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.929774 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.929824 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.929840 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.929862 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.929875 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:33Z","lastTransitionTime":"2026-01-21T15:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.930017 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:33Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.953865 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef757d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:33Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.971177 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:33Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:33 crc kubenswrapper[4760]: I0121 15:47:33.988137 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:33Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.004348 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a7fff61ff004ec8235bfddb3a9e0c3a82ac17591b1841d10b366f02833c2fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:34Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.020654 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"rest
artCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:34Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.032655 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.032753 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.032765 4760 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.032781 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.032794 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:34Z","lastTransitionTime":"2026-01-21T15:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.038440 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b15
4edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:34Z is 
after 2025-08-24T17:21:41Z" Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.053071 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:34Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.066175 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:34Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.075683 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad0b627-e961-4ca1-9d20-35844f88fac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33201b5ec0e28f2916354c9c63b13c3cdcb8776cba4df19d81098b28233d5c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-476wz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:34Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.088134 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:34Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.100009 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9
db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:34Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.115284 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:34Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.128406 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:34Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.135022 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.135236 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.135313 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.135466 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.135671 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:34Z","lastTransitionTime":"2026-01-21T15:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.146599 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a
91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedA
t\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:34Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.238460 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.238491 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.238499 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.238514 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.238522 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:34Z","lastTransitionTime":"2026-01-21T15:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.340435 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.340493 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.340503 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.340521 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.340532 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:34Z","lastTransitionTime":"2026-01-21T15:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.443712 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.443762 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.443779 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.443802 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.443818 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:34Z","lastTransitionTime":"2026-01-21T15:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.550962 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.551298 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.551464 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.551568 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.551636 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:34Z","lastTransitionTime":"2026-01-21T15:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.588086 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 23:02:01.853375979 +0000 UTC Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.621525 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:47:34 crc kubenswrapper[4760]: E0121 15:47:34.621689 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.659867 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.659923 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.659944 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.659967 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.659982 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:34Z","lastTransitionTime":"2026-01-21T15:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.763235 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.763531 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.763596 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.763658 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.763722 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:34Z","lastTransitionTime":"2026-01-21T15:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.866279 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.866319 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.866352 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.866370 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.866382 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:34Z","lastTransitionTime":"2026-01-21T15:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.921043 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" event={"ID":"bd3c6c18-f174-4022-96c5-5892413c76fd","Type":"ContainerStarted","Data":"ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09"} Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.937378 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad0b627-e961-4ca1-9d20-35844f88fac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33201b5ec0e28f2916354c9c63b13c3cdcb8776cba4df19d81098b28233d5c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-476wz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:34Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.951745 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:34Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.965676 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:34Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.968170 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.968206 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.968219 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.968234 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.968246 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:34Z","lastTransitionTime":"2026-01-21T15:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.980209 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:34Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:34 crc kubenswrapper[4760]: I0121 15:47:34.991873 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:34Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.005196 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.018014 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f129
62a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.031963 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.048033 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.064470 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-566
7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.070672 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.070719 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.070737 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.070759 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.070777 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:35Z","lastTransitionTime":"2026-01-21T15:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.079956 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a7fff61ff004ec8235bfddb3a9e0c3a82ac17591b1841d10b366f02833c2fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.098849 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.121582 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef757d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.139409 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.154156 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.173522 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.173557 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.173566 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.173582 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.173591 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:35Z","lastTransitionTime":"2026-01-21T15:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.260196 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:47:35 crc kubenswrapper[4760]: E0121 15:47:35.260426 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-21 15:47:43.260402927 +0000 UTC m=+33.928172515 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.275969 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.276007 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.276018 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.276034 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.276045 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:35Z","lastTransitionTime":"2026-01-21T15:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.361204 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.361283 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.361354 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.361426 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:47:35 crc kubenswrapper[4760]: E0121 15:47:35.361444 4760 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:47:35 crc kubenswrapper[4760]: E0121 15:47:35.361514 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:47:43.361497156 +0000 UTC m=+34.029266734 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:47:35 crc kubenswrapper[4760]: E0121 15:47:35.361534 4760 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:47:35 crc kubenswrapper[4760]: E0121 15:47:35.361589 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:47:43.361571238 +0000 UTC m=+34.029340856 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:47:35 crc kubenswrapper[4760]: E0121 15:47:35.361678 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:47:35 crc kubenswrapper[4760]: E0121 15:47:35.361729 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:47:35 crc kubenswrapper[4760]: E0121 15:47:35.361751 4760 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:47:35 crc kubenswrapper[4760]: E0121 15:47:35.361845 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 15:47:43.361818683 +0000 UTC m=+34.029588291 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:47:35 crc kubenswrapper[4760]: E0121 15:47:35.361970 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:47:35 crc kubenswrapper[4760]: E0121 15:47:35.361988 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:47:35 crc kubenswrapper[4760]: E0121 15:47:35.362004 4760 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:47:35 crc kubenswrapper[4760]: E0121 15:47:35.362072 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 15:47:43.362057499 +0000 UTC m=+34.029827107 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.378456 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.378511 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.378528 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.378552 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.378568 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:35Z","lastTransitionTime":"2026-01-21T15:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.481760 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.481837 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.481864 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.481894 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.481913 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:35Z","lastTransitionTime":"2026-01-21T15:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.585186 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.585229 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.585240 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.585260 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.585273 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:35Z","lastTransitionTime":"2026-01-21T15:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.588600 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 07:07:59.923003604 +0000 UTC Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.622022 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.622038 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:47:35 crc kubenswrapper[4760]: E0121 15:47:35.622205 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:47:35 crc kubenswrapper[4760]: E0121 15:47:35.622274 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.688398 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.688448 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.688460 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.688480 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.688493 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:35Z","lastTransitionTime":"2026-01-21T15:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.791171 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.791235 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.791251 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.791275 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.791295 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:35Z","lastTransitionTime":"2026-01-21T15:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.894311 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.895534 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.895576 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.895602 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.895626 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:35Z","lastTransitionTime":"2026-01-21T15:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.927684 4760 generic.go:334] "Generic (PLEG): container finished" podID="bd3c6c18-f174-4022-96c5-5892413c76fd" containerID="ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09" exitCode=0 Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.928134 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" event={"ID":"bd3c6c18-f174-4022-96c5-5892413c76fd","Type":"ContainerDied","Data":"ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09"} Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.945452 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad0b627-e961-4ca1-9d20-35844f88fac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33201b5ec0e28f2916354c9c63b13c3cdcb8776cba4df19d81098b28233d5c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a4
5dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-476wz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.960582 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.976487 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:35 crc kubenswrapper[4760]: I0121 15:47:35.991646 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:35Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.001610 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 
15:47:36.001656 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.001666 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.001682 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.001692 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:36Z","lastTransitionTime":"2026-01-21T15:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.005120 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:36Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.019184 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:36Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.033507 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f129
62a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:36Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.046754 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:36Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.060366 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:36Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.075391 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:36Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.089492 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a7fff61ff004ec8235bfddb3a9e0c3a82ac17591b1841d10b366f02833c2fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T15:47:36Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.104045 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.104083 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.104091 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.104111 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.104121 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:36Z","lastTransitionTime":"2026-01-21T15:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.105554 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:36Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.124202 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef757d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:36Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.135140 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:36Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.151187 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:36Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.206835 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.206865 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.206873 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.206887 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.206896 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:36Z","lastTransitionTime":"2026-01-21T15:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.308983 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.309014 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.309023 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.309036 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.309046 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:36Z","lastTransitionTime":"2026-01-21T15:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.411553 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.411582 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.411590 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.411603 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.411611 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:36Z","lastTransitionTime":"2026-01-21T15:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.514044 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.514093 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.514106 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.514129 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.514144 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:36Z","lastTransitionTime":"2026-01-21T15:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.588799 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 07:53:20.307664177 +0000 UTC
Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.616404 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.616442 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.616456 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.616477 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.616492 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:36Z","lastTransitionTime":"2026-01-21T15:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.622528 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:47:36 crc kubenswrapper[4760]: E0121 15:47:36.622705 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.719089 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.719136 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.719149 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.719166 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.719556 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:36Z","lastTransitionTime":"2026-01-21T15:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.821814 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.821855 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.821869 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.821886 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.821897 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:36Z","lastTransitionTime":"2026-01-21T15:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.924419 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.924461 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.924470 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.924485 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.924496 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:36Z","lastTransitionTime":"2026-01-21T15:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.936390 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" event={"ID":"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49","Type":"ContainerStarted","Data":"b573d9532629b49e9ea199764c2df9e0e826644b82512cc1bfbca108a0d7d822"}
Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.936727 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm"
Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.936769 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm"
Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.941031 4760 generic.go:334] "Generic (PLEG): container finished" podID="bd3c6c18-f174-4022-96c5-5892413c76fd" containerID="f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6" exitCode=0
Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.941077 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" event={"ID":"bd3c6c18-f174-4022-96c5-5892413c76fd","Type":"ContainerDied","Data":"f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6"}
Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.951403 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:36Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.966659 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:36Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:36 crc kubenswrapper[4760]: I0121 15:47:36.985385 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:36Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.006768 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef757d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\
"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{
\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.022448 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.026247 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.026291 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.026300 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.026316 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.026352 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:37Z","lastTransitionTime":"2026-01-21T15:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.034457 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.040286 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.054527 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a7fff61ff004ec8235bfddb3a9e0c3a82ac17591b1841d10b366f02833c2fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.075067 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"stat
e\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b573d9532629b49e9ea199764c2df9e0e826644b82512cc1bfbca108a0d7d822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"
},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d58
4a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mount
Path\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.087656 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.101123 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.119410 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.129442 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 
15:47:37.129509 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.129519 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.129538 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.129550 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:37Z","lastTransitionTime":"2026-01-21T15:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.137043 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.154177 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad0b627-e961-4ca1-9d20-35844f88fac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33201b5ec0e28f2916354c9c63b13c3cdcb8776cba4df19d81098b28233d5c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-476wz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.170577 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.182865 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9
db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.198981 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.218590 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.232866 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.232922 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.232937 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.232957 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.232973 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:37Z","lastTransitionTime":"2026-01-21T15:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.236924 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e53873153
58ec5888ec8242a6b1c3b625705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
1-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.263101 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef757d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.274938 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.287526 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.302801 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a7fff61ff004ec8235bfddb3a9e0c3a82ac17591b1841d10b366f02833c2fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.325763 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b573d9532629b49e9ea199764c2df9e0e826644b82512cc1bfbca108a0d7d822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.336159 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.336195 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.336203 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.336216 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.336227 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:37Z","lastTransitionTime":"2026-01-21T15:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.341499 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e75
20d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.357269 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.371283 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.383754 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.395242 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad0b627-e961-4ca1-9d20-35844f88fac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33201b5ec0e28f2916354c9c63b13c3cdcb8776cba4df19d81098b28233d5c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-476wz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.410588 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.424488 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9
db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.439214 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.439252 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.439263 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 
21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.439283 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.439295 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:37Z","lastTransitionTime":"2026-01-21T15:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.542553 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.542690 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.542710 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.542735 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.542749 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:37Z","lastTransitionTime":"2026-01-21T15:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.589386 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 07:00:52.662326716 +0000 UTC Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.621881 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:47:37 crc kubenswrapper[4760]: E0121 15:47:37.622075 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.621887 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:47:37 crc kubenswrapper[4760]: E0121 15:47:37.622271 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.645680 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.645720 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.645731 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.645747 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.645760 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:37Z","lastTransitionTime":"2026-01-21T15:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.748799 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.748849 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.748862 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.748880 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.748891 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:37Z","lastTransitionTime":"2026-01-21T15:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.851249 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.851293 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.851304 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.851337 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.851349 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:37Z","lastTransitionTime":"2026-01-21T15:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.947378 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" event={"ID":"bd3c6c18-f174-4022-96c5-5892413c76fd","Type":"ContainerStarted","Data":"fbf7aa4ff7b0ecc831f88bd2dcdd99f141c9fa0316bd2a3cc9eb8a0241f0defc"} Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.948053 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.953792 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.953839 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.953851 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.953869 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.953880 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:37Z","lastTransitionTime":"2026-01-21T15:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.963763 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:37Z 
is after 2025-08-24T17:21:41Z" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.970582 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.977556 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:37 crc kubenswrapper[4760]: I0121 15:47:37.992633 4760 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-po
d-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"ima
geID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\
\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:37Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.007546 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.023387 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbf7aa4ff7b0ecc831f88bd2dcdd99f141c9fa0316bd2a3cc9eb8a0241f0defc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf1cd
61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.042589 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests
\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1
a48179132a3ffa2bc1ef757d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.055474 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.056415 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.056469 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:38 crc 
kubenswrapper[4760]: I0121 15:47:38.056481 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.056503 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.056517 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:38Z","lastTransitionTime":"2026-01-21T15:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.070815 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.085018 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a7fff61ff004ec8235bfddb3a9e0c3a82ac17591b1841d10b366f02833c2fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.106459 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b573d9532629b49e9ea199764c2df9e0e826644b82512cc1bfbca108a0d7d822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.121365 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026
b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.136440 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.149827 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.159310 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 
15:47:38.159372 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.159384 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.159401 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.159412 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:38Z","lastTransitionTime":"2026-01-21T15:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.161579 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.171577 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad0b627-e961-4ca1-9d20-35844f88fac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33201b5ec0e28f2916354c9c63b13c3cdcb8776cba4df19d81098b28233d5c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-476wz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.189672 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef757d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.201625 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.212187 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.221218 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a7fff61ff004ec8235bfddb3a9e0c3a82ac17591b1841d10b366f02833c2fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.237092 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b573d9532629b49e9ea199764c2df9e0e826644b82512cc1bfbca108a0d7d822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.247584 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026
b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.261743 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.261784 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.261797 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.261814 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.261828 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:38Z","lastTransitionTime":"2026-01-21T15:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.264085 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.279300 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.289371 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.298496 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad0b627-e961-4ca1-9d20-35844f88fac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33201b5ec0e28f2916354c9c63b13c3cdcb8776cba4df19d81098b28233d5c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-476wz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.311712 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.327136 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9
db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.341502 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.354135 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.362535 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.362575 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.362586 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.362603 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.362615 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:38Z","lastTransitionTime":"2026-01-21T15:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.371726 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbf7aa4ff7b0ecc831f88bd2dcdd99f141c9fa0316bd2a3cc9eb8a0241f0defc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf1cd
61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:38 crc kubenswrapper[4760]: E0121 15:47:38.375121 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177
c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c3
7e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeByt
es\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d0d32b68-30d1-4a80-9669-b44aebde12c8\\\",
\\\"systemUUID\\\":\\\"2d155234-f7e3-4a8d-82a9-efdad3b8958b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.379786 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.379834 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.379847 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.379866 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.379878 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:38Z","lastTransitionTime":"2026-01-21T15:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:38 crc kubenswrapper[4760]: E0121 15:47:38.391395 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d0d32b68-30d1-4a80-9669-b44aebde12c8\\\",\\\"systemUUID\\\":\\\"2d155234-f7e3-4a8d-82a9-efdad3b8958b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.394953 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.394989 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.395000 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.395015 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.395026 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:38Z","lastTransitionTime":"2026-01-21T15:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:38 crc kubenswrapper[4760]: E0121 15:47:38.405871 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d0d32b68-30d1-4a80-9669-b44aebde12c8\\\",\\\"systemUUID\\\":\\\"2d155234-f7e3-4a8d-82a9-efdad3b8958b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.408825 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.408854 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.408862 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.408877 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.408907 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:38Z","lastTransitionTime":"2026-01-21T15:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:38 crc kubenswrapper[4760]: E0121 15:47:38.420704 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d0d32b68-30d1-4a80-9669-b44aebde12c8\\\",\\\"systemUUID\\\":\\\"2d155234-f7e3-4a8d-82a9-efdad3b8958b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.423920 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.423953 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.423965 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.423981 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.423994 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:38Z","lastTransitionTime":"2026-01-21T15:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:38 crc kubenswrapper[4760]: E0121 15:47:38.436082 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d0d32b68-30d1-4a80-9669-b44aebde12c8\\\",\\\"systemUUID\\\":\\\"2d155234-f7e3-4a8d-82a9-efdad3b8958b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:38Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:38 crc kubenswrapper[4760]: E0121 15:47:38.436245 4760 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.437989 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.438015 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.438024 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.438038 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.438047 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:38Z","lastTransitionTime":"2026-01-21T15:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.541357 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.541421 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.541439 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.541480 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.541498 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:38Z","lastTransitionTime":"2026-01-21T15:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.590048 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 09:16:31.048207763 +0000 UTC Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.621555 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:47:38 crc kubenswrapper[4760]: E0121 15:47:38.621725 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.644237 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.644273 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.644283 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.644301 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.644313 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:38Z","lastTransitionTime":"2026-01-21T15:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.747163 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.747214 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.747226 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.747244 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.747261 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:38Z","lastTransitionTime":"2026-01-21T15:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.850218 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.850265 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.850277 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.850295 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.850306 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:38Z","lastTransitionTime":"2026-01-21T15:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.952473 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.952517 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.952527 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.952541 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:38 crc kubenswrapper[4760]: I0121 15:47:38.952551 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:38Z","lastTransitionTime":"2026-01-21T15:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.055518 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.055561 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.055570 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.055587 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.055602 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:39Z","lastTransitionTime":"2026-01-21T15:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.158251 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.158297 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.158312 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.158365 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.158379 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:39Z","lastTransitionTime":"2026-01-21T15:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.261337 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.261629 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.261640 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.261656 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.261667 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:39Z","lastTransitionTime":"2026-01-21T15:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.363587 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.363633 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.363644 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.363662 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.363674 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:39Z","lastTransitionTime":"2026-01-21T15:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.466159 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.466214 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.466224 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.466241 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.466251 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:39Z","lastTransitionTime":"2026-01-21T15:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.569601 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.569658 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.569671 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.569691 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.569703 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:39Z","lastTransitionTime":"2026-01-21T15:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.591088 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 00:46:15.844774577 +0000 UTC Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.621937 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.622045 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:47:39 crc kubenswrapper[4760]: E0121 15:47:39.622112 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:47:39 crc kubenswrapper[4760]: E0121 15:47:39.622253 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.641900 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.657652 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9
db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.670992 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.672840 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.672871 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.672885 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.672903 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.672915 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:39Z","lastTransitionTime":"2026-01-21T15:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.684435 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.707001 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbf7aa4ff7b0ecc831f88bd2dcdd99f141c9fa0316bd2a3cc9eb8a0241f0defc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf1cd
61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.727936 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests
\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1
a48179132a3ffa2bc1ef757d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.741552 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.756494 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.774873 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a7fff61ff004ec8235bfddb3a9e0c3a82ac17591b1841d10b366f02833c2fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.774941 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.774966 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.774995 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.775011 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.775023 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:39Z","lastTransitionTime":"2026-01-21T15:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.877875 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.877916 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.877928 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.877946 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.877958 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:39Z","lastTransitionTime":"2026-01-21T15:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.958282 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gfprm_aa19ef03-9cda-4ae5-b47c-4a3bac73dc49/ovnkube-controller/0.log" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.962502 4760 generic.go:334] "Generic (PLEG): container finished" podID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerID="b573d9532629b49e9ea199764c2df9e0e826644b82512cc1bfbca108a0d7d822" exitCode=1 Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.962638 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" event={"ID":"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49","Type":"ContainerDied","Data":"b573d9532629b49e9ea199764c2df9e0e826644b82512cc1bfbca108a0d7d822"} Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.963591 4760 scope.go:117] "RemoveContainer" containerID="b573d9532629b49e9ea199764c2df9e0e826644b82512cc1bfbca108a0d7d822" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.980554 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.980603 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.980616 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.980635 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.980649 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:39Z","lastTransitionTime":"2026-01-21T15:47:39Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:39 crc kubenswrapper[4760]: I0121 15:47:39.984554 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b573d9532629b49e9ea199764c2df9e0e826644b82512cc1bfbca108a0d7d822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:39Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.006063 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026
b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.021815 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.035433 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.046694 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.060043 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad0b627-e961-4ca1-9d20-35844f88fac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33201b5ec0e28f2916354c9c63b13c3cdcb8776cba4df19d81098b28233d5c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-476wz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.082855 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b573d9532629b49e9ea199764c2df9e0e826644b82512cc1bfbca108a0d7d822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b573d9532629b49e9ea199764c2df9e0e826644b82512cc1bfbca108a0d7d822\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:47:39Z\\\",\\\"message\\\":\\\"9.137015 6029 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 15:47:39.135846 6029 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:47:39.135869 
6029 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:47:39.135893 6029 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:47:39.135924 6029 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:47:39.136981 6029 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 15:47:39.137810 6029 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 15:47:39.137853 6029 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 15:47:39.137881 6029 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 15:47:39.137930 6029 factory.go:656] Stopping watch factory\\\\nI0121 15:47:39.137948 6029 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 15:47:39.137946 6029 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 15:47:39.137976 6029 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 15:47:39.137996 6029 ovnkube.go:599] Stopped ovnkube\\\\nI0121 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\
\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.083716 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.083761 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.083772 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.083790 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.083800 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:40Z","lastTransitionTime":"2026-01-21T15:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.102277 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\
",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef757d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb1
8ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.114588 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.129990 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.142940 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a7fff61ff004ec8235bfddb3a9e0c3a82ac17591b1841d10b366f02833c2fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.157616 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"rest
artCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.170771 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.181910 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.185676 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 
15:47:40.185717 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.185729 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.185746 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.185758 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:40Z","lastTransitionTime":"2026-01-21T15:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.191428 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.202879 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad0b627-e961-4ca1-9d20-35844f88fac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33201b5ec0e28f2916354c9c63b13c3cdcb8776cba4df19d81098b28233d5c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-476wz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.217556 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.228688 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9
db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.240044 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.251739 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.267085 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbf7aa4ff7b0ecc831f88bd2dcdd99f141c9fa0316bd2a3cc9eb8a0241f0defc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf1cd
61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.287657 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.287698 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.287710 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.287730 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.287759 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:40Z","lastTransitionTime":"2026-01-21T15:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.390730 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.390789 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.390799 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.390815 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.390826 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:40Z","lastTransitionTime":"2026-01-21T15:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.492847 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.492885 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.492894 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.492911 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.492921 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:40Z","lastTransitionTime":"2026-01-21T15:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.591942 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 11:15:53.576887244 +0000 UTC Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.596046 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.596082 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.596092 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.596111 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.596123 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:40Z","lastTransitionTime":"2026-01-21T15:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.622373 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:47:40 crc kubenswrapper[4760]: E0121 15:47:40.622544 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.698490 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.698520 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.698528 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.698542 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.698552 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:40Z","lastTransitionTime":"2026-01-21T15:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.800714 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.800759 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.800771 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.800789 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.800804 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:40Z","lastTransitionTime":"2026-01-21T15:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.898775 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.903011 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.903050 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.903061 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.903081 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.903108 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:40Z","lastTransitionTime":"2026-01-21T15:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.915063 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.927655 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.938400 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.954982 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbf7aa4ff7b0ecc831f88bd2dcdd99f141c9fa0316bd2a3cc9eb8a0241f0defc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf1cd
61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.968830 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gfprm_aa19ef03-9cda-4ae5-b47c-4a3bac73dc49/ovnkube-controller/0.log" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.972755 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir
\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}]
,\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.972831 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" event={"ID":"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49","Type":"ContainerStarted","Data":"173172c0e4dbb726e5d667c377492c729f527438ad7e7228ac7c881022c27056"} Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.973498 4760 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.986126 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:40 crc kubenswrapper[4760]: I0121 15:47:40.999612 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:40Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.005057 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.005101 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.005113 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.005129 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.005140 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:41Z","lastTransitionTime":"2026-01-21T15:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.011367 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a7fff61ff004ec8235bfddb3a9e0c3a82ac17591b1841d10b366f02833c2fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.032205 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b573d9532629b49e9ea199764c2df9e0e826644b82512cc1bfbca108a0d7d822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b573d9532629b49e9ea199764c2df9e0e826644b82512cc1bfbca108a0d7d822\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:47:39Z\\\",\\\"message\\\":\\\"9.137015 6029 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 15:47:39.135846 6029 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:47:39.135869 
6029 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:47:39.135893 6029 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:47:39.135924 6029 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:47:39.136981 6029 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 15:47:39.137810 6029 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 15:47:39.137853 6029 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 15:47:39.137881 6029 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 15:47:39.137930 6029 factory.go:656] Stopping watch factory\\\\nI0121 15:47:39.137948 6029 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 15:47:39.137946 6029 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 15:47:39.137976 6029 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 15:47:39.137996 6029 ovnkube.go:599] Stopped ovnkube\\\\nI0121 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\
\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.058556 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef757d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.078422 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.100801 4760 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.108043 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.108109 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.108121 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.108141 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.108153 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:41Z","lastTransitionTime":"2026-01-21T15:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.113949 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.128275 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqxc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad0b627-e961-4ca1-9d20-35844f88fac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33201b5ec0e28f2916354c9c63b13c3cdcb8776cba4df19d81098b28233d5c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-476wz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.141268 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.155080 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.165909 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9
db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.179775 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707
659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.193077 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.209067 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbf7aa4ff7b0ecc831f88bd2dcdd99f141c9fa0316bd2a3cc9eb8a0241f0defc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf1cd
61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.210352 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.210390 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.210401 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.210417 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.210428 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:41Z","lastTransitionTime":"2026-01-21T15:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.221586 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a7fff61ff004ec8235bfddb3a9e0c3a82ac17591b1841d10b366f02833c2fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.239036 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173172c0e4dbb726e5d667c377492c729f527438ad7e7228ac7c881022c27056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b573d9532629b49e9ea199764c2df9e0e826644b82512cc1bfbca108a0d7d822\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:47:39Z\\\",\\\"message\\\":\\\"9.137015 6029 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 15:47:39.135846 6029 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:47:39.135869 6029 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 
15:47:39.135893 6029 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:47:39.135924 6029 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:47:39.136981 6029 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 15:47:39.137810 6029 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 15:47:39.137853 6029 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 15:47:39.137881 6029 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 15:47:39.137930 6029 factory.go:656] Stopping watch factory\\\\nI0121 15:47:39.137948 6029 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 15:47:39.137946 6029 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 15:47:39.137976 6029 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 15:47:39.137996 6029 ovnkube.go:599] Stopped ovnkube\\\\nI0121 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\"
:\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.259368 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef757d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.272152 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.285211 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.295216 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad0b627-e961-4ca1-9d20-35844f88fac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33201b5ec0e28f2916354c9c63b13c3cdcb8776cba4df19d81098b28233d5c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-476wz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.307819 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.312176 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.312207 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.312217 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.312233 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.312243 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:41Z","lastTransitionTime":"2026-01-21T15:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.320454 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f
96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.332270 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.345174 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.415042 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.415070 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.415078 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.415091 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.415100 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:41Z","lastTransitionTime":"2026-01-21T15:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.518167 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.518235 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.518253 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.518279 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.518296 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:41Z","lastTransitionTime":"2026-01-21T15:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.592662 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 14:38:35.975159062 +0000 UTC Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.621778 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.621858 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:47:41 crc kubenswrapper[4760]: E0121 15:47:41.621990 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:47:41 crc kubenswrapper[4760]: E0121 15:47:41.622123 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.623248 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.623309 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.623460 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.623513 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.623538 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:41Z","lastTransitionTime":"2026-01-21T15:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.664801 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc"] Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.665262 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.668680 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.668955 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.688093 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.705629 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.718675 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a7fff61ff004ec8235bfddb3a9e0c3a82ac17591b1841d10b366f02833c2fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.726114 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.726168 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.726180 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.726197 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.726208 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:41Z","lastTransitionTime":"2026-01-21T15:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.736269 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173172c0e4dbb726e5d667c377492c729f527438ad7e7228ac7c881022c27056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b573d9532629b49e9ea199764c2df9e0e826644b82512cc1bfbca108a0d7d822\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:47:39Z\\\",\\\"message\\\":\\\"9.137015 6029 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 15:47:39.135846 6029 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:47:39.135869 6029 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 
15:47:39.135893 6029 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:47:39.135924 6029 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:47:39.136981 6029 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 15:47:39.137810 6029 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 15:47:39.137853 6029 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 15:47:39.137881 6029 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 15:47:39.137930 6029 factory.go:656] Stopping watch factory\\\\nI0121 15:47:39.137948 6029 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 15:47:39.137946 6029 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 15:47:39.137976 6029 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 15:47:39.137996 6029 ovnkube.go:599] Stopped ovnkube\\\\nI0121 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\"
:\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.758960 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef757d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.765352 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-k8rxc\" (UID: \"ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.765406 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-k8rxc\" (UID: \"ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.765501 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrt5f\" (UniqueName: \"kubernetes.io/projected/ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df-kube-api-access-nrt5f\") pod \"ovnkube-control-plane-749d76644c-k8rxc\" (UID: \"ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" 
Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.765552 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df-env-overrides\") pod \"ovnkube-control-plane-749d76644c-k8rxc\" (UID: \"ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.771541 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.781268 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.793864 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad0b627-e961-4ca1-9d20-35844f88fac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33201b5ec0e28f2916354c9c63b13c3cdcb8776cba4df19d81098b28233d5c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-476wz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.806942 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.823501 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.828776 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.828829 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.828839 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.828857 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.828869 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:41Z","lastTransitionTime":"2026-01-21T15:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.839677 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:
47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.851472 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-pr
oxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.863875 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.867066 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrt5f\" (UniqueName: \"kubernetes.io/projected/ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df-kube-api-access-nrt5f\") pod \"ovnkube-control-plane-749d76644c-k8rxc\" (UID: \"ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.867460 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df-env-overrides\") pod \"ovnkube-control-plane-749d76644c-k8rxc\" (UID: \"ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.867516 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-k8rxc\" (UID: \"ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.867541 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-k8rxc\" (UID: \"ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.868223 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df-env-overrides\") pod \"ovnkube-control-plane-749d76644c-k8rxc\" (UID: \"ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.869106 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-k8rxc\" (UID: \"ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.872774 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-k8rxc\" (UID: \"ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" Jan 21 15:47:41 crc 
kubenswrapper[4760]: I0121 15:47:41.877317 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbf7aa4ff7b0ecc831f88bd2dcdd99f141c9fa0316bd2a3cc9eb8a0241f0defc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b5
5d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.884126 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrt5f\" (UniqueName: \"kubernetes.io/projected/ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df-kube-api-access-nrt5f\") pod \"ovnkube-control-plane-749d76644c-k8rxc\" (UID: \"ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.890129 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-k8rxc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.906752 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335
e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources
\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:41Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.931744 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.931795 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:41 crc kubenswrapper[4760]: 
I0121 15:47:41.931806 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.931826 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.931841 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:41Z","lastTransitionTime":"2026-01-21T15:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:41 crc kubenswrapper[4760]: I0121 15:47:41.979219 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" Jan 21 15:47:41 crc kubenswrapper[4760]: W0121 15:47:41.990754 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff8d9ad0_e9fd_4b9e_b0cc_7072ffcce6df.slice/crio-dba700ad96b5c9d8e389f2cb73284c54ff7f89b36f0136bb45d08f801d4deb80 WatchSource:0}: Error finding container dba700ad96b5c9d8e389f2cb73284c54ff7f89b36f0136bb45d08f801d4deb80: Status 404 returned error can't find the container with id dba700ad96b5c9d8e389f2cb73284c54ff7f89b36f0136bb45d08f801d4deb80 Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.036793 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.037248 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.037919 4760 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.037945 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.037957 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:42Z","lastTransitionTime":"2026-01-21T15:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.140732 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.141001 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.141014 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.141028 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.141037 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:42Z","lastTransitionTime":"2026-01-21T15:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.243687 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.243726 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.243736 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.243755 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.243766 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:42Z","lastTransitionTime":"2026-01-21T15:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.347763 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.347823 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.347837 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.347854 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.347864 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:42Z","lastTransitionTime":"2026-01-21T15:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.450721 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.450759 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.450767 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.450788 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.450798 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:42Z","lastTransitionTime":"2026-01-21T15:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.554276 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.554334 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.554349 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.554366 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.554377 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:42Z","lastTransitionTime":"2026-01-21T15:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.594391 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 02:21:44.391640272 +0000 UTC Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.621916 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:47:42 crc kubenswrapper[4760]: E0121 15:47:42.622046 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.658558 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.658610 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.658635 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.658667 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.658680 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:42Z","lastTransitionTime":"2026-01-21T15:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.760710 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.760749 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.760763 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.760785 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.760800 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:42Z","lastTransitionTime":"2026-01-21T15:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.766041 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-bbr8l"] Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.766526 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:47:42 crc kubenswrapper[4760]: E0121 15:47:42.766590 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.779090 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"
os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T15:47:42Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.790726 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:42Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.806030 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707
659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:42Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.820230 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:42Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.837530 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbf7aa4ff7b0ecc831f88bd2dcdd99f141c9fa0316bd2a3cc9eb8a0241f0defc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf1cd
61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:42Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.849722 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-k8rxc\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:42Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.862973 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bbr8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a4b6476-7a89-41b4-b918-5628f622c7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-465m7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-465m7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bbr8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:42Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:42 crc 
kubenswrapper[4760]: I0121 15:47:42.863805 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.863867 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.863880 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.863901 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.863917 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:42Z","lastTransitionTime":"2026-01-21T15:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.877947 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0a4b6476-7a89-41b4-b918-5628f622c7c1-metrics-certs\") pod \"network-metrics-daemon-bbr8l\" (UID: \"0a4b6476-7a89-41b4-b918-5628f622c7c1\") " pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.878007 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-465m7\" (UniqueName: \"kubernetes.io/projected/0a4b6476-7a89-41b4-b918-5628f622c7c1-kube-api-access-465m7\") pod \"network-metrics-daemon-bbr8l\" (UID: \"0a4b6476-7a89-41b4-b918-5628f622c7c1\") " pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.885064 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef757d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:42Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.901777 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:42Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.915099 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:42Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.925588 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a7fff61ff004ec8235bfddb3a9e0c3a82ac17591b1841d10b366f02833c2fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:42Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.943221 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173172c0e4dbb726e5d667c377492c729f527438ad7e7228ac7c881022c27056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b573d9532629b49e9ea199764c2df9e0e826644b82512cc1bfbca108a0d7d822\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:47:39Z\\\",\\\"message\\\":\\\"9.137015 6029 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 15:47:39.135846 6029 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:47:39.135869 6029 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 
15:47:39.135893 6029 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:47:39.135924 6029 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:47:39.136981 6029 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 15:47:39.137810 6029 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 15:47:39.137853 6029 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 15:47:39.137881 6029 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 15:47:39.137930 6029 factory.go:656] Stopping watch factory\\\\nI0121 15:47:39.137948 6029 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 15:47:39.137946 6029 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 15:47:39.137976 6029 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 15:47:39.137996 6029 ovnkube.go:599] Stopped ovnkube\\\\nI0121 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\"
:\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:42Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.956618 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:42Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.966742 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.966785 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.966798 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.966819 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.966833 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:42Z","lastTransitionTime":"2026-01-21T15:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.969344 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f
96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:42Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.978459 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0a4b6476-7a89-41b4-b918-5628f622c7c1-metrics-certs\") pod \"network-metrics-daemon-bbr8l\" (UID: \"0a4b6476-7a89-41b4-b918-5628f622c7c1\") " pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.978516 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-465m7\" (UniqueName: 
\"kubernetes.io/projected/0a4b6476-7a89-41b4-b918-5628f622c7c1-kube-api-access-465m7\") pod \"network-metrics-daemon-bbr8l\" (UID: \"0a4b6476-7a89-41b4-b918-5628f622c7c1\") " pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:47:42 crc kubenswrapper[4760]: E0121 15:47:42.978636 4760 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:47:42 crc kubenswrapper[4760]: E0121 15:47:42.978701 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0a4b6476-7a89-41b4-b918-5628f622c7c1-metrics-certs podName:0a4b6476-7a89-41b4-b918-5628f622c7c1 nodeName:}" failed. No retries permitted until 2026-01-21 15:47:43.478682141 +0000 UTC m=+34.146451719 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0a4b6476-7a89-41b4-b918-5628f622c7c1-metrics-certs") pod "network-metrics-daemon-bbr8l" (UID: "0a4b6476-7a89-41b4-b918-5628f622c7c1") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.981407 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gfprm_aa19ef03-9cda-4ae5-b47c-4a3bac73dc49/ovnkube-controller/1.log" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.981792 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:42Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.981999 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gfprm_aa19ef03-9cda-4ae5-b47c-4a3bac73dc49/ovnkube-controller/0.log" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.984234 4760 generic.go:334] "Generic (PLEG): container finished" podID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerID="173172c0e4dbb726e5d667c377492c729f527438ad7e7228ac7c881022c27056" exitCode=1 Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.984301 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" event={"ID":"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49","Type":"ContainerDied","Data":"173172c0e4dbb726e5d667c377492c729f527438ad7e7228ac7c881022c27056"} Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.984397 4760 scope.go:117] "RemoveContainer" 
containerID="b573d9532629b49e9ea199764c2df9e0e826644b82512cc1bfbca108a0d7d822" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.985263 4760 scope.go:117] "RemoveContainer" containerID="173172c0e4dbb726e5d667c377492c729f527438ad7e7228ac7c881022c27056" Jan 21 15:47:42 crc kubenswrapper[4760]: E0121 15:47:42.985507 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-gfprm_openshift-ovn-kubernetes(aa19ef03-9cda-4ae5-b47c-4a3bac73dc49)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.987160 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" event={"ID":"ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df","Type":"ContainerStarted","Data":"ad55a1510883069f146d73179af7340208f024da02e1a3de1ff99a6a0805b9fd"} Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.987222 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" event={"ID":"ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df","Type":"ContainerStarted","Data":"dc3343040fe5b24cd28de48097d45276f8f81ad5ead92c7db4f417015e3d731d"} Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.987237 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" event={"ID":"ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df","Type":"ContainerStarted","Data":"dba700ad96b5c9d8e389f2cb73284c54ff7f89b36f0136bb45d08f801d4deb80"} Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.993057 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:42Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:42 crc kubenswrapper[4760]: I0121 15:47:42.996456 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-465m7\" (UniqueName: \"kubernetes.io/projected/0a4b6476-7a89-41b4-b918-5628f622c7c1-kube-api-access-465m7\") pod \"network-metrics-daemon-bbr8l\" (UID: \"0a4b6476-7a89-41b4-b918-5628f622c7c1\") " pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.004785 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqxc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad0b627-e961-4ca1-9d20-35844f88fac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33201b5ec0e28f2916354c9c63b13c3cdcb8776cba4df19d81098b28233d5c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-476wz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.019528 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.031091 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.040454 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.049426 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.060639 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad0b627-e961-4ca1-9d20-35844f88fac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33201b5ec0e28f2916354c9c63b13c3cdcb8776cba4df19d81098b28233d5c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-476wz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.068837 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.068879 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.068887 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.068903 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.068914 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:43Z","lastTransitionTime":"2026-01-21T15:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.076543 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:
47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.088978 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-pr
oxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-21T15:47:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.099865 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bbr8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a4b6476-7a89-41b4-b918-5628f622c7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-465m7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-465m7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bbr8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:43 crc 
kubenswrapper[4760]: I0121 15:47:43.113872 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558f
c4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 
15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.127055 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.141829 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbf7aa4ff7b0ecc831f88bd2dcdd99f141c9fa0316bd2a3cc9eb8a0241f0defc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf1cd
61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.153918 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3343040fe5b24cd28de48097d45276f8f81ad5ead92c7db4f417015e3d731d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad55a1510883069f146d73179af7340208f024da02e1a3de1ff99a6a0805b9fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-k8rxc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-21T15:47:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.171751 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.171790 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.171802 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.171822 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.171835 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:43Z","lastTransitionTime":"2026-01-21T15:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.171811 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173172c0e4dbb726e5d667c377492c729f527438ad7e7228ac7c881022c27056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b573d9532629b49e9ea199764c2df9e0e826644b82512cc1bfbca108a0d7d822\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:47:39Z\\\",\\\"message\\\":\\\"9.137015 6029 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 15:47:39.135846 6029 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:47:39.135869 6029 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 
15:47:39.135893 6029 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:47:39.135924 6029 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:47:39.136981 6029 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 15:47:39.137810 6029 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 15:47:39.137853 6029 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 15:47:39.137881 6029 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 15:47:39.137930 6029 factory.go:656] Stopping watch factory\\\\nI0121 15:47:39.137948 6029 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 15:47:39.137946 6029 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 15:47:39.137976 6029 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 15:47:39.137996 6029 ovnkube.go:599] Stopped ovnkube\\\\nI0121 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173172c0e4dbb726e5d667c377492c729f527438ad7e7228ac7c881022c27056\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"34] Service openshift-console/downloads retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{downloads openshift-console bbc81ad7-5d87-40bf-82c5-a4db2311cff9 12322 0 2025-02-23 05:39:22 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[] map[operator.openshift.io/spec-hash:41d6e4f36bf41ab5be57dec2289f1f8807bbed4b0f642342f213a53bb3ff4d6d] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{0 8080 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: 
console,component: downloads,},ClusterIP:10.217.4.213,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.213],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0121 15:47:40.952691 6195 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initiali\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd
\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.188477 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef757d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"ex
itCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.200142 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.210724 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.221535 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a7fff61ff004ec8235bfddb3a9e0c3a82ac17591b1841d10b366f02833c2fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:43Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.274864 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.274910 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.274919 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.274936 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.274946 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:43Z","lastTransitionTime":"2026-01-21T15:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.281264 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:47:43 crc kubenswrapper[4760]: E0121 15:47:43.281415 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:47:59.281390326 +0000 UTC m=+49.949159924 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.378097 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.378155 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.378168 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.378198 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:43 crc kubenswrapper[4760]: 
I0121 15:47:43.378214 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:43Z","lastTransitionTime":"2026-01-21T15:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.382718 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.382762 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.382793 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.382817 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:47:43 crc kubenswrapper[4760]: E0121 15:47:43.382886 4760 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:47:43 crc kubenswrapper[4760]: E0121 15:47:43.382954 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:47:59.382935145 +0000 UTC m=+50.050704723 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:47:43 crc kubenswrapper[4760]: E0121 15:47:43.382956 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:47:43 crc kubenswrapper[4760]: E0121 15:47:43.382968 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:47:43 crc kubenswrapper[4760]: E0121 15:47:43.383024 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:47:43 crc kubenswrapper[4760]: E0121 15:47:43.383039 4760 projected.go:194] Error preparing data for projected 
volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:47:43 crc kubenswrapper[4760]: E0121 15:47:43.383100 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 15:47:59.383082838 +0000 UTC m=+50.050852416 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:47:43 crc kubenswrapper[4760]: E0121 15:47:43.382985 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:47:43 crc kubenswrapper[4760]: E0121 15:47:43.383156 4760 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:47:43 crc kubenswrapper[4760]: E0121 15:47:43.382996 4760 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:47:43 crc kubenswrapper[4760]: E0121 15:47:43.383251 4760 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:47:59.383230961 +0000 UTC m=+50.051000569 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:47:43 crc kubenswrapper[4760]: E0121 15:47:43.383285 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 15:47:59.383276012 +0000 UTC m=+50.051045590 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.480977 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.481018 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.481027 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.481044 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.481055 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:43Z","lastTransitionTime":"2026-01-21T15:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.483793 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0a4b6476-7a89-41b4-b918-5628f622c7c1-metrics-certs\") pod \"network-metrics-daemon-bbr8l\" (UID: \"0a4b6476-7a89-41b4-b918-5628f622c7c1\") " pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:47:43 crc kubenswrapper[4760]: E0121 15:47:43.483959 4760 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:47:43 crc kubenswrapper[4760]: E0121 15:47:43.484029 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0a4b6476-7a89-41b4-b918-5628f622c7c1-metrics-certs podName:0a4b6476-7a89-41b4-b918-5628f622c7c1 nodeName:}" failed. No retries permitted until 2026-01-21 15:47:44.484009863 +0000 UTC m=+35.151779441 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0a4b6476-7a89-41b4-b918-5628f622c7c1-metrics-certs") pod "network-metrics-daemon-bbr8l" (UID: "0a4b6476-7a89-41b4-b918-5628f622c7c1") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.583431 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.583796 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.583982 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.584220 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.584697 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:43Z","lastTransitionTime":"2026-01-21T15:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.594961 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 06:46:17.123490352 +0000 UTC Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.622319 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.622335 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:47:43 crc kubenswrapper[4760]: E0121 15:47:43.622525 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:47:43 crc kubenswrapper[4760]: E0121 15:47:43.622558 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.691891 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.692243 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.692351 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.692441 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.692502 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:43Z","lastTransitionTime":"2026-01-21T15:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.795377 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.795414 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.795424 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.795441 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.795452 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:43Z","lastTransitionTime":"2026-01-21T15:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.898223 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.898270 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.898280 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.898299 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.898309 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:43Z","lastTransitionTime":"2026-01-21T15:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:43 crc kubenswrapper[4760]: I0121 15:47:43.991662 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gfprm_aa19ef03-9cda-4ae5-b47c-4a3bac73dc49/ovnkube-controller/1.log" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.000127 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.000175 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.000190 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.000209 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.000224 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:44Z","lastTransitionTime":"2026-01-21T15:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.103208 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.103251 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.103264 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.103283 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.103296 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:44Z","lastTransitionTime":"2026-01-21T15:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.205868 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.205907 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.205915 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.205929 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.205939 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:44Z","lastTransitionTime":"2026-01-21T15:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.308806 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.308865 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.308884 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.308909 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.308925 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:44Z","lastTransitionTime":"2026-01-21T15:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.411911 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.411955 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.411968 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.411986 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.412000 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:44Z","lastTransitionTime":"2026-01-21T15:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.496424 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0a4b6476-7a89-41b4-b918-5628f622c7c1-metrics-certs\") pod \"network-metrics-daemon-bbr8l\" (UID: \"0a4b6476-7a89-41b4-b918-5628f622c7c1\") " pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:47:44 crc kubenswrapper[4760]: E0121 15:47:44.496700 4760 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:47:44 crc kubenswrapper[4760]: E0121 15:47:44.496894 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0a4b6476-7a89-41b4-b918-5628f622c7c1-metrics-certs podName:0a4b6476-7a89-41b4-b918-5628f622c7c1 nodeName:}" failed. No retries permitted until 2026-01-21 15:47:46.496861938 +0000 UTC m=+37.164631546 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0a4b6476-7a89-41b4-b918-5628f622c7c1-metrics-certs") pod "network-metrics-daemon-bbr8l" (UID: "0a4b6476-7a89-41b4-b918-5628f622c7c1") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.515039 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.515087 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.515096 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.515110 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.515121 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:44Z","lastTransitionTime":"2026-01-21T15:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.596013 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 02:45:52.616563531 +0000 UTC Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.616936 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.616964 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.616976 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.616991 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.617000 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:44Z","lastTransitionTime":"2026-01-21T15:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.622389 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.622450 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:47:44 crc kubenswrapper[4760]: E0121 15:47:44.622490 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:47:44 crc kubenswrapper[4760]: E0121 15:47:44.622591 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.719562 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.719603 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.719614 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.719632 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.719644 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:44Z","lastTransitionTime":"2026-01-21T15:47:44Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.822484 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.822727 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.822818 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.822887 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.822954 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:44Z","lastTransitionTime":"2026-01-21T15:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.925687 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.925729 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.925743 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.925762 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:44 crc kubenswrapper[4760]: I0121 15:47:44.925775 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:44Z","lastTransitionTime":"2026-01-21T15:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.028866 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.028913 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.028926 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.028945 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.028958 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:45Z","lastTransitionTime":"2026-01-21T15:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.132590 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.132649 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.132662 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.132681 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.132693 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:45Z","lastTransitionTime":"2026-01-21T15:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.236023 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.236073 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.236084 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.236103 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.236117 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:45Z","lastTransitionTime":"2026-01-21T15:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.338779 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.338829 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.338843 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.338864 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.338879 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:45Z","lastTransitionTime":"2026-01-21T15:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.442309 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.442386 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.442399 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.442418 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.442431 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:45Z","lastTransitionTime":"2026-01-21T15:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.545717 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.545778 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.545793 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.545817 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.545833 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:45Z","lastTransitionTime":"2026-01-21T15:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.596444 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 21:50:23.217020877 +0000 UTC Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.622652 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.623539 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:47:45 crc kubenswrapper[4760]: E0121 15:47:45.623722 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:47:45 crc kubenswrapper[4760]: E0121 15:47:45.623871 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.649447 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.649504 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.649524 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.649546 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.649560 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:45Z","lastTransitionTime":"2026-01-21T15:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.752152 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.752228 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.752263 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.752294 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.752316 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:45Z","lastTransitionTime":"2026-01-21T15:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.854298 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.854406 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.854420 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.854499 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.854513 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:45Z","lastTransitionTime":"2026-01-21T15:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.957292 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.957361 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.957377 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.957395 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:45 crc kubenswrapper[4760]: I0121 15:47:45.957405 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:45Z","lastTransitionTime":"2026-01-21T15:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.060344 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.060412 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.060429 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.060456 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.060481 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:46Z","lastTransitionTime":"2026-01-21T15:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.163592 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.163640 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.163657 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.163675 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.163686 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:46Z","lastTransitionTime":"2026-01-21T15:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.270995 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.271041 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.271049 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.271067 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.271080 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:46Z","lastTransitionTime":"2026-01-21T15:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.373610 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.373669 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.373683 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.373700 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.373718 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:46Z","lastTransitionTime":"2026-01-21T15:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.476009 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.476055 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.476066 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.476084 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.476096 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:46Z","lastTransitionTime":"2026-01-21T15:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.515868 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0a4b6476-7a89-41b4-b918-5628f622c7c1-metrics-certs\") pod \"network-metrics-daemon-bbr8l\" (UID: \"0a4b6476-7a89-41b4-b918-5628f622c7c1\") " pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:47:46 crc kubenswrapper[4760]: E0121 15:47:46.516017 4760 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:47:46 crc kubenswrapper[4760]: E0121 15:47:46.516102 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0a4b6476-7a89-41b4-b918-5628f622c7c1-metrics-certs podName:0a4b6476-7a89-41b4-b918-5628f622c7c1 nodeName:}" failed. No retries permitted until 2026-01-21 15:47:50.516078698 +0000 UTC m=+41.183848276 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0a4b6476-7a89-41b4-b918-5628f622c7c1-metrics-certs") pod "network-metrics-daemon-bbr8l" (UID: "0a4b6476-7a89-41b4-b918-5628f622c7c1") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.579660 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.579704 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.579715 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.579731 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.579740 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:46Z","lastTransitionTime":"2026-01-21T15:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.597105 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 10:33:49.72460161 +0000 UTC Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.621961 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.621961 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:47:46 crc kubenswrapper[4760]: E0121 15:47:46.622155 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:47:46 crc kubenswrapper[4760]: E0121 15:47:46.622238 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.682245 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.682285 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.682310 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.682338 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.682348 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:46Z","lastTransitionTime":"2026-01-21T15:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.784225 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.784282 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.784294 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.784312 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.784345 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:46Z","lastTransitionTime":"2026-01-21T15:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.886972 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.887014 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.887026 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.887043 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.887057 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:46Z","lastTransitionTime":"2026-01-21T15:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.989884 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.989948 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.989959 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.989999 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:46 crc kubenswrapper[4760]: I0121 15:47:46.990014 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:46Z","lastTransitionTime":"2026-01-21T15:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.094004 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.094067 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.094076 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.094090 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.094100 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:47Z","lastTransitionTime":"2026-01-21T15:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.196636 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.196710 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.196718 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.196737 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.196746 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:47Z","lastTransitionTime":"2026-01-21T15:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.299220 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.299278 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.299296 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.299343 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.299364 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:47Z","lastTransitionTime":"2026-01-21T15:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.402040 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.402097 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.402107 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.402125 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.402140 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:47Z","lastTransitionTime":"2026-01-21T15:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.504233 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.504275 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.504283 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.504303 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.504315 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:47Z","lastTransitionTime":"2026-01-21T15:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.597638 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 05:33:27.413782527 +0000 UTC Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.606396 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.606441 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.606453 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.606471 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.606485 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:47Z","lastTransitionTime":"2026-01-21T15:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.621862 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.621893 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:47:47 crc kubenswrapper[4760]: E0121 15:47:47.622076 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:47:47 crc kubenswrapper[4760]: E0121 15:47:47.622130 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.709980 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.710079 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.710102 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.710136 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.710158 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:47Z","lastTransitionTime":"2026-01-21T15:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.812972 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.813030 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.813050 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.813076 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.813093 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:47Z","lastTransitionTime":"2026-01-21T15:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.916693 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.916763 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.916780 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.916808 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:47 crc kubenswrapper[4760]: I0121 15:47:47.916827 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:47Z","lastTransitionTime":"2026-01-21T15:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.020044 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.020085 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.020096 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.020113 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.020124 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:48Z","lastTransitionTime":"2026-01-21T15:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.123241 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.123297 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.123310 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.123355 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.123368 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:48Z","lastTransitionTime":"2026-01-21T15:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.227077 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.227118 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.227127 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.227146 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.227161 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:48Z","lastTransitionTime":"2026-01-21T15:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.330403 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.330464 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.330478 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.330502 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.330514 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:48Z","lastTransitionTime":"2026-01-21T15:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.433840 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.433909 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.433923 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.433942 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.433954 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:48Z","lastTransitionTime":"2026-01-21T15:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.536543 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.536593 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.536645 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.536665 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.536678 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:48Z","lastTransitionTime":"2026-01-21T15:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.548240 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.548284 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.548293 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.548308 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.548316 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:48Z","lastTransitionTime":"2026-01-21T15:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:48 crc kubenswrapper[4760]: E0121 15:47:48.564417 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d0d32b68-30d1-4a80-9669-b44aebde12c8\\\",\\\"systemUUID\\\":\\\"2d155234-f7e3-4a8d-82a9-efdad3b8958b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:48Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.569298 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.569348 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.569358 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.569376 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.569387 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:48Z","lastTransitionTime":"2026-01-21T15:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:48 crc kubenswrapper[4760]: E0121 15:47:48.581555 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d0d32b68-30d1-4a80-9669-b44aebde12c8\\\",\\\"systemUUID\\\":\\\"2d155234-f7e3-4a8d-82a9-efdad3b8958b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:48Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.585998 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.586079 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.586104 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.586132 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.586153 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:48Z","lastTransitionTime":"2026-01-21T15:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.598123 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 13:17:44.360492932 +0000 UTC Jan 21 15:47:48 crc kubenswrapper[4760]: E0121 15:47:48.599647 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177
c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c3
7e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeByt
es\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d0d32b68-30d1-4a80-9669-b44aebde12c8\\\",
\\\"systemUUID\\\":\\\"2d155234-f7e3-4a8d-82a9-efdad3b8958b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:48Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.604415 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.604509 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.604529 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.604556 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.604574 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:48Z","lastTransitionTime":"2026-01-21T15:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.622352 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.622389 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:47:48 crc kubenswrapper[4760]: E0121 15:47:48.622534 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:47:48 crc kubenswrapper[4760]: E0121 15:47:48.622687 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:47:48 crc kubenswrapper[4760]: E0121 15:47:48.623297 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d0d32b68-30d1-4a80-9669-b44aebde12c8\\\",\\\"systemUUID\\\":\\\"2d155234-f7e3-4a8d-82a9-efdad3b8958b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:48Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.626922 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.626974 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.626983 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.626999 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.627009 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:48Z","lastTransitionTime":"2026-01-21T15:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:48 crc kubenswrapper[4760]: E0121 15:47:48.639269 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d0d32b68-30d1-4a80-9669-b44aebde12c8\\\",\\\"systemUUID\\\":\\\"2d155234-f7e3-4a8d-82a9-efdad3b8958b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:48Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:48 crc kubenswrapper[4760]: E0121 15:47:48.639428 4760 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.641295 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.641363 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.641379 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.641402 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.641421 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:48Z","lastTransitionTime":"2026-01-21T15:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.744971 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.745082 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.745099 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.745122 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.745138 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:48Z","lastTransitionTime":"2026-01-21T15:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.847753 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.847803 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.847815 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.847832 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.847847 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:48Z","lastTransitionTime":"2026-01-21T15:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.949513 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.949550 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.949562 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.949579 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:48 crc kubenswrapper[4760]: I0121 15:47:48.949589 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:48Z","lastTransitionTime":"2026-01-21T15:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.053014 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.053054 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.053062 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.053077 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.053087 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:49Z","lastTransitionTime":"2026-01-21T15:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.156387 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.156424 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.156435 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.156453 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.156466 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:49Z","lastTransitionTime":"2026-01-21T15:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.259546 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.260019 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.260032 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.260052 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.260070 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:49Z","lastTransitionTime":"2026-01-21T15:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.363249 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.363294 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.363303 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.363335 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.363346 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:49Z","lastTransitionTime":"2026-01-21T15:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.502597 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.502633 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.502643 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.502660 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.502674 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:49Z","lastTransitionTime":"2026-01-21T15:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.599064 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 13:02:38.115463243 +0000 UTC Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.605158 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.605225 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.605239 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.605256 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.605266 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:49Z","lastTransitionTime":"2026-01-21T15:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.621658 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.621805 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:47:49 crc kubenswrapper[4760]: E0121 15:47:49.621871 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:47:49 crc kubenswrapper[4760]: E0121 15:47:49.621950 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.641926 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef757d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.655892 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.672516 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.684696 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a7fff61ff004ec8235bfddb3a9e0c3a82ac17591b1841d10b366f02833c2fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.710965 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.711011 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.711023 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.711044 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.711063 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:49Z","lastTransitionTime":"2026-01-21T15:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.714661 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173172c0e4dbb726e5d667c377492c729f527438ad7e7228ac7c881022c27056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b573d9532629b49e9ea199764c2df9e0e826644b82512cc1bfbca108a0d7d822\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:47:39Z\\\",\\\"message\\\":\\\"9.137015 6029 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 15:47:39.135846 6029 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:47:39.135869 6029 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 
15:47:39.135893 6029 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:47:39.135924 6029 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 15:47:39.136981 6029 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 15:47:39.137810 6029 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 15:47:39.137853 6029 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 15:47:39.137881 6029 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 15:47:39.137930 6029 factory.go:656] Stopping watch factory\\\\nI0121 15:47:39.137948 6029 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 15:47:39.137946 6029 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 15:47:39.137976 6029 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 15:47:39.137996 6029 ovnkube.go:599] Stopped ovnkube\\\\nI0121 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173172c0e4dbb726e5d667c377492c729f527438ad7e7228ac7c881022c27056\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"34] Service openshift-console/downloads retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{downloads openshift-console bbc81ad7-5d87-40bf-82c5-a4db2311cff9 12322 0 2025-02-23 05:39:22 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[] map[operator.openshift.io/spec-hash:41d6e4f36bf41ab5be57dec2289f1f8807bbed4b0f642342f213a53bb3ff4d6d] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{0 8080 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: 
console,component: downloads,},ClusterIP:10.217.4.213,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.213],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0121 15:47:40.952691 6195 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initiali\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd
\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.726066 4760 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c
4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.746381 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.760638 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.772776 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.782464 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad0b627-e961-4ca1-9d20-35844f88fac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33201b5ec0e28f2916354c9c63b13c3cdcb8776cba4df19d81098b28233d5c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-476wz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.798721 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.814724 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:49 crc 
kubenswrapper[4760]: I0121 15:47:49.814763 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.814774 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.814794 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.814807 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:49Z","lastTransitionTime":"2026-01-21T15:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.815970 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9db374715a149de88bf244ba92b6c
1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.831125 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707
659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.844903 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.859845 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbf7aa4ff7b0ecc831f88bd2dcdd99f141c9fa0316bd2a3cc9eb8a0241f0defc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf1cd
61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.873029 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3343040fe5b24cd28de48097d45276f8f81ad5ead92c7db4f417015e3d731d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad55a1510883069f146d73179af7340208f024da02e1a3de1ff99a6a0805b9fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-k8rxc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-21T15:47:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.885763 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bbr8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a4b6476-7a89-41b4-b918-5628f622c7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-465m7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-465m7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bbr8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:49Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:49 crc 
kubenswrapper[4760]: I0121 15:47:49.917671 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.917714 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.917725 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.917745 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:49 crc kubenswrapper[4760]: I0121 15:47:49.917757 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:49Z","lastTransitionTime":"2026-01-21T15:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.021140 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.021203 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.021220 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.021247 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.021264 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:50Z","lastTransitionTime":"2026-01-21T15:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.125273 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.125360 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.125378 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.125404 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.125423 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:50Z","lastTransitionTime":"2026-01-21T15:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.227785 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.227831 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.227845 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.227862 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.227875 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:50Z","lastTransitionTime":"2026-01-21T15:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.330192 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.330232 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.330241 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.330258 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.330268 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:50Z","lastTransitionTime":"2026-01-21T15:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.432678 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.432727 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.432736 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.432752 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.432762 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:50Z","lastTransitionTime":"2026-01-21T15:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.535744 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.535787 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.535798 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.535814 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.535824 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:50Z","lastTransitionTime":"2026-01-21T15:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.561541 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0a4b6476-7a89-41b4-b918-5628f622c7c1-metrics-certs\") pod \"network-metrics-daemon-bbr8l\" (UID: \"0a4b6476-7a89-41b4-b918-5628f622c7c1\") " pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:47:50 crc kubenswrapper[4760]: E0121 15:47:50.561686 4760 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:47:50 crc kubenswrapper[4760]: E0121 15:47:50.561744 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0a4b6476-7a89-41b4-b918-5628f622c7c1-metrics-certs podName:0a4b6476-7a89-41b4-b918-5628f622c7c1 nodeName:}" failed. No retries permitted until 2026-01-21 15:47:58.561728503 +0000 UTC m=+49.229498081 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0a4b6476-7a89-41b4-b918-5628f622c7c1-metrics-certs") pod "network-metrics-daemon-bbr8l" (UID: "0a4b6476-7a89-41b4-b918-5628f622c7c1") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.599843 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 05:56:30.369035408 +0000 UTC Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.622244 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:47:50 crc kubenswrapper[4760]: E0121 15:47:50.622439 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.622604 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:47:50 crc kubenswrapper[4760]: E0121 15:47:50.622853 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.638094 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.638373 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.638477 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.638582 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.638687 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:50Z","lastTransitionTime":"2026-01-21T15:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.740802 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.740872 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.740884 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.740905 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.740919 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:50Z","lastTransitionTime":"2026-01-21T15:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.843496 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.843540 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.843550 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.843565 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.843578 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:50Z","lastTransitionTime":"2026-01-21T15:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.946316 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.946373 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.946385 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.946400 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:50 crc kubenswrapper[4760]: I0121 15:47:50.946412 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:50Z","lastTransitionTime":"2026-01-21T15:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.049918 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.050000 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.050021 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.050052 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.050076 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:51Z","lastTransitionTime":"2026-01-21T15:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.151950 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.152000 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.152013 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.152029 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.152039 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:51Z","lastTransitionTime":"2026-01-21T15:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.254911 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.254991 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.255015 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.255043 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.255066 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:51Z","lastTransitionTime":"2026-01-21T15:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.358802 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.358855 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.358867 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.358884 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.358897 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:51Z","lastTransitionTime":"2026-01-21T15:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.462121 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.462173 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.462184 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.462201 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.462211 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:51Z","lastTransitionTime":"2026-01-21T15:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.564443 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.564490 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.564504 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.564527 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.564540 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:51Z","lastTransitionTime":"2026-01-21T15:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.600558 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 23:56:37.778735606 +0000 UTC Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.621942 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.621965 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:47:51 crc kubenswrapper[4760]: E0121 15:47:51.622140 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:47:51 crc kubenswrapper[4760]: E0121 15:47:51.622234 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.666940 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.666991 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.667000 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.667015 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.667026 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:51Z","lastTransitionTime":"2026-01-21T15:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.769555 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.769614 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.769623 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.769643 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.769656 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:51Z","lastTransitionTime":"2026-01-21T15:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.872449 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.872491 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.872499 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.872515 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.872524 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:51Z","lastTransitionTime":"2026-01-21T15:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.975569 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.975628 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.975642 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.975662 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:51 crc kubenswrapper[4760]: I0121 15:47:51.975677 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:51Z","lastTransitionTime":"2026-01-21T15:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.078514 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.078590 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.078614 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.078647 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.078669 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:52Z","lastTransitionTime":"2026-01-21T15:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.181318 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.181385 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.181398 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.181417 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.181429 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:52Z","lastTransitionTime":"2026-01-21T15:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.283662 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.283704 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.283713 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.283728 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.283736 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:52Z","lastTransitionTime":"2026-01-21T15:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.385490 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.385531 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.385542 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.385558 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.385575 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:52Z","lastTransitionTime":"2026-01-21T15:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.487941 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.488017 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.488041 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.488081 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.488099 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:52Z","lastTransitionTime":"2026-01-21T15:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.590795 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.590840 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.590855 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.590872 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.590884 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:52Z","lastTransitionTime":"2026-01-21T15:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.601405 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 10:24:26.648060325 +0000 UTC
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.621792 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.621893 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l"
Jan 21 15:47:52 crc kubenswrapper[4760]: E0121 15:47:52.621932 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 15:47:52 crc kubenswrapper[4760]: E0121 15:47:52.622086 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1"
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.693830 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.693908 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.693920 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.693942 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.693955 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:52Z","lastTransitionTime":"2026-01-21T15:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.796634 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.796674 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.796686 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.796706 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.796719 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:52Z","lastTransitionTime":"2026-01-21T15:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.899602 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.899646 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.899660 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.899677 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:52 crc kubenswrapper[4760]: I0121 15:47:52.899691 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:52Z","lastTransitionTime":"2026-01-21T15:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.002220 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.002269 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.002281 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.002301 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.002314 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:53Z","lastTransitionTime":"2026-01-21T15:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.106076 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.106146 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.106171 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.106205 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.106228 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:53Z","lastTransitionTime":"2026-01-21T15:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.208148 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.208219 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.208235 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.208259 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.208271 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:53Z","lastTransitionTime":"2026-01-21T15:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.310097 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.310128 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.310136 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.310153 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.310163 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:53Z","lastTransitionTime":"2026-01-21T15:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.413074 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.413155 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.413173 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.413202 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.413225 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:53Z","lastTransitionTime":"2026-01-21T15:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.516522 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.516610 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.516630 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.516659 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.516675 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:53Z","lastTransitionTime":"2026-01-21T15:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.601831 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 05:17:53.591745905 +0000 UTC
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.619659 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.619726 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.619740 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.619762 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.619777 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:53Z","lastTransitionTime":"2026-01-21T15:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.622165 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.622268 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:47:53 crc kubenswrapper[4760]: E0121 15:47:53.622312 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 15:47:53 crc kubenswrapper[4760]: E0121 15:47:53.622540 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.723282 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.723399 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.723424 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.723457 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.723480 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:53Z","lastTransitionTime":"2026-01-21T15:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.826489 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.826901 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.827202 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.827482 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.827865 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:53Z","lastTransitionTime":"2026-01-21T15:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.931714 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.932296 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.932432 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.932541 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:53 crc kubenswrapper[4760]: I0121 15:47:53.932675 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:53Z","lastTransitionTime":"2026-01-21T15:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.035148 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.035192 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.035205 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.035225 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.035239 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:54Z","lastTransitionTime":"2026-01-21T15:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.137530 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.137750 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.137841 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.138005 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.138075 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:54Z","lastTransitionTime":"2026-01-21T15:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.240879 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.240916 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.240928 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.240946 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.240957 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:54Z","lastTransitionTime":"2026-01-21T15:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.344031 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.344758 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.344849 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.344922 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.344985 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:54Z","lastTransitionTime":"2026-01-21T15:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.447279 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.447341 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.447354 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.447375 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.447387 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:54Z","lastTransitionTime":"2026-01-21T15:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.550127 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.550190 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.550202 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.550220 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.550230 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:54Z","lastTransitionTime":"2026-01-21T15:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.602859 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 22:18:59.939958271 +0000 UTC
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.621415 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.621498 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:47:54 crc kubenswrapper[4760]: E0121 15:47:54.621557 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1"
Jan 21 15:47:54 crc kubenswrapper[4760]: E0121 15:47:54.621645 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.653695 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.653968 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.654074 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.654204 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.654292 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:54Z","lastTransitionTime":"2026-01-21T15:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.757069 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.757129 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.757142 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.757158 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.757169 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:54Z","lastTransitionTime":"2026-01-21T15:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.859717 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.859781 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.859800 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.859827 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.859844 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:54Z","lastTransitionTime":"2026-01-21T15:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.962756 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.962791 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.962799 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.962814 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:54 crc kubenswrapper[4760]: I0121 15:47:54.962829 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:54Z","lastTransitionTime":"2026-01-21T15:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.065023 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.065120 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.065142 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.065162 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.065177 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:55Z","lastTransitionTime":"2026-01-21T15:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.169654 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.170209 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.170375 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.170489 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.170598 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:55Z","lastTransitionTime":"2026-01-21T15:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.273887 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.273939 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.273953 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.273971 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.273987 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:55Z","lastTransitionTime":"2026-01-21T15:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.377458 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.377499 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.377511 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.377531 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.377543 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:55Z","lastTransitionTime":"2026-01-21T15:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.479669 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.479702 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.479710 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.479723 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.479752 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:55Z","lastTransitionTime":"2026-01-21T15:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.581874 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.581939 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.581957 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.581994 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.582012 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:55Z","lastTransitionTime":"2026-01-21T15:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.604275 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 04:00:51.168571158 +0000 UTC Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.622003 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.622013 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:47:55 crc kubenswrapper[4760]: E0121 15:47:55.622247 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:47:55 crc kubenswrapper[4760]: E0121 15:47:55.622297 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.685415 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.685487 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.685505 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.685534 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.685568 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:55Z","lastTransitionTime":"2026-01-21T15:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.789216 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.789267 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.789282 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.789307 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.789350 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:55Z","lastTransitionTime":"2026-01-21T15:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.891633 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.891692 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.891703 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.891719 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.891729 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:55Z","lastTransitionTime":"2026-01-21T15:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.994498 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.994539 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.994550 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.994570 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:55 crc kubenswrapper[4760]: I0121 15:47:55.994582 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:55Z","lastTransitionTime":"2026-01-21T15:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.097170 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.097231 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.097243 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.097262 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.097274 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:56Z","lastTransitionTime":"2026-01-21T15:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.200260 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.200310 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.200351 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.200371 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.200389 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:56Z","lastTransitionTime":"2026-01-21T15:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.302906 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.303035 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.303058 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.303089 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.303114 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:56Z","lastTransitionTime":"2026-01-21T15:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.410172 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.410223 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.410236 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.410259 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.410272 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:56Z","lastTransitionTime":"2026-01-21T15:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.512768 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.512815 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.512828 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.512849 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.512864 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:56Z","lastTransitionTime":"2026-01-21T15:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.605386 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 22:35:19.674348454 +0000 UTC Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.615738 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.615781 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.615792 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.615808 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.615818 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:56Z","lastTransitionTime":"2026-01-21T15:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.622278 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:47:56 crc kubenswrapper[4760]: E0121 15:47:56.622434 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.622276 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:47:56 crc kubenswrapper[4760]: E0121 15:47:56.622661 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.623942 4760 scope.go:117] "RemoveContainer" containerID="173172c0e4dbb726e5d667c377492c729f527438ad7e7228ac7c881022c27056" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.641445 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\
\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:56Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.660864 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:56Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.685613 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:56Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.698283 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:56Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.708445 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad0b627-e961-4ca1-9d20-35844f88fac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33201b5ec0e28f2916354c9c63b13c3cdcb8776cba4df19d81098b28233d5c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-476wz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:56Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.718796 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.718890 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.718907 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.718925 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.718937 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:56Z","lastTransitionTime":"2026-01-21T15:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.722816 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:
47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:56Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.733661 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-pr
oxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-21T15:47:56Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.745373 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\
\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc47827
4c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720
243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:56Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.758123 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:56Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.777277 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbf7aa4ff7b0ecc831f88bd2dcdd99f141c9fa0316bd2a3cc9eb8a0241f0defc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf1cd
61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:56Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.791622 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3343040fe5b24cd28de48097d45276f8f81ad5ead92c7db4f417015e3d731d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad55a1510883069f146d73179af7340208f024da02e1a3de1ff99a6a0805b9fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-k8rxc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-21T15:47:56Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.802459 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bbr8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a4b6476-7a89-41b4-b918-5628f622c7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-465m7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-465m7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bbr8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:56Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:56 crc 
kubenswrapper[4760]: I0121 15:47:56.821008 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef757d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:56Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.821489 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.821522 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.821533 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.821548 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.821556 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:56Z","lastTransitionTime":"2026-01-21T15:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.832718 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:56Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.845148 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:56Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.856200 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a7fff61ff004ec8235bfddb3a9e0c3a82ac17591b1841d10b366f02833c2fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:56Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.872616 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://173172c0e4dbb726e5d667c377492c729f527438ad7e7228ac7c881022c27056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173172c0e4dbb726e5d667c377492c729f527438ad7e7228ac7c881022c27056\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"34] Service openshift-console/downloads retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{downloads openshift-console bbc81ad7-5d87-40bf-82c5-a4db2311cff9 12322 0 2025-02-23 05:39:22 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[] 
map[operator.openshift.io/spec-hash:41d6e4f36bf41ab5be57dec2289f1f8807bbed4b0f642342f213a53bb3ff4d6d] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{0 8080 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: console,component: downloads,},ClusterIP:10.217.4.213,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.213],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0121 15:47:40.952691 6195 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initiali\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gfprm_openshift-ovn-kubernetes(aa19ef03-9cda-4ae5-b47c-4a3bac73dc49)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5
261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:56Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.924897 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.924933 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.924944 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.924962 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:56 crc kubenswrapper[4760]: I0121 15:47:56.924973 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:56Z","lastTransitionTime":"2026-01-21T15:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.027857 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.027888 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.027897 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.027910 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.027919 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:57Z","lastTransitionTime":"2026-01-21T15:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.107611 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gfprm_aa19ef03-9cda-4ae5-b47c-4a3bac73dc49/ovnkube-controller/1.log" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.111546 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" event={"ID":"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49","Type":"ContainerStarted","Data":"8e8c25d44d63ba5e418d4e013aa13b2ecf9c851b4dea690755cc9fde2b81efad"} Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.112109 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.130625 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:57Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.132219 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.132252 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.132263 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:57 crc 
kubenswrapper[4760]: I0121 15:47:57.132282 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.132293 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:57Z","lastTransitionTime":"2026-01-21T15:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.147565 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbf7aa4ff7b0ecc831f88bd2dcdd99f141c9fa0316bd2a3cc9eb8a0241f0defc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7
1916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:57Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.163362 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3343040fe5b24cd28de48097d45276f8f81ad5ead92c7db4f417015e3d731d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad55a1510883069f146d73179af7340208f02
4da02e1a3de1ff99a6a0805b9fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-k8rxc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:57Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.175640 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bbr8l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a4b6476-7a89-41b4-b918-5628f622c7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-465m7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-465m7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bbr8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:57Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:57 crc 
kubenswrapper[4760]: I0121 15:47:57.191766 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558f
c4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 
15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:57Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.209666 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:57Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.224409 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:57Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.234383 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.234413 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.234422 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.234438 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.234446 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:57Z","lastTransitionTime":"2026-01-21T15:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.241696 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a7fff61ff004ec8235bfddb3a9e0c3a82ac17591b1841d10b366f02833c2fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:57Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.262011 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e8c25d44d63ba5e418d4e013aa13b2ecf9c851b4dea690755cc9fde2b81efad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173172c0e4dbb726e5d667c377492c729f527438ad7e7228ac7c881022c27056\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"34] Service openshift-console/downloads retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{downloads openshift-console bbc81ad7-5d87-40bf-82c5-a4db2311cff9 12322 0 2025-02-23 05:39:22 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[] 
map[operator.openshift.io/spec-hash:41d6e4f36bf41ab5be57dec2289f1f8807bbed4b0f642342f213a53bb3ff4d6d] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{0 8080 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: console,component: downloads,},ClusterIP:10.217.4.213,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.213],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0121 15:47:40.952691 6195 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller 
initiali\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"n
ame\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:57Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.289387 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef757d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:57Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.308827 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:57Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.323001 4760 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:57Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.335312 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:57Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.336710 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.336766 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.336777 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.336799 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.336813 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:57Z","lastTransitionTime":"2026-01-21T15:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.347885 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad0b627-e961-4ca1-9d20-35844f88fac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33201b5ec0e28f2916354c9c63b13c3cdcb8776cba4df19d81098b28233d5c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-476wz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:57Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.360911 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-sy
ncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:57Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.372925 4760 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnl
y\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:57Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.392915 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:57Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.438862 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:57 crc 
kubenswrapper[4760]: I0121 15:47:57.438903 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.438911 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.438927 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.438936 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:57Z","lastTransitionTime":"2026-01-21T15:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.540990 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.541043 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.541056 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.541078 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.541090 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:57Z","lastTransitionTime":"2026-01-21T15:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.606596 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 23:29:53.691837843 +0000 UTC Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.622389 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.622451 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:47:57 crc kubenswrapper[4760]: E0121 15:47:57.622528 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:47:57 crc kubenswrapper[4760]: E0121 15:47:57.622692 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.643438 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.643487 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.643496 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.643508 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.643519 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:57Z","lastTransitionTime":"2026-01-21T15:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.746567 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.746863 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.746871 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.746885 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.746894 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:57Z","lastTransitionTime":"2026-01-21T15:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.850256 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.850314 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.850364 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.850390 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.850408 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:57Z","lastTransitionTime":"2026-01-21T15:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.952659 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.952737 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.952749 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.952768 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:57 crc kubenswrapper[4760]: I0121 15:47:57.952782 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:57Z","lastTransitionTime":"2026-01-21T15:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.055997 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.056038 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.056048 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.056066 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.056078 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:58Z","lastTransitionTime":"2026-01-21T15:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.159081 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.159139 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.159151 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.159170 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.159184 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:58Z","lastTransitionTime":"2026-01-21T15:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.261673 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.261717 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.261730 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.261747 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.261759 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:58Z","lastTransitionTime":"2026-01-21T15:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.364199 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.364248 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.364261 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.364278 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.364289 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:58Z","lastTransitionTime":"2026-01-21T15:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.466218 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.466261 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.466273 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.466292 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.466303 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:58Z","lastTransitionTime":"2026-01-21T15:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.569217 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.569289 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.569301 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.569345 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.569360 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:58Z","lastTransitionTime":"2026-01-21T15:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.607673 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 07:48:11.414280787 +0000 UTC Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.620664 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0a4b6476-7a89-41b4-b918-5628f622c7c1-metrics-certs\") pod \"network-metrics-daemon-bbr8l\" (UID: \"0a4b6476-7a89-41b4-b918-5628f622c7c1\") " pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:47:58 crc kubenswrapper[4760]: E0121 15:47:58.620901 4760 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:47:58 crc kubenswrapper[4760]: E0121 15:47:58.620974 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0a4b6476-7a89-41b4-b918-5628f622c7c1-metrics-certs podName:0a4b6476-7a89-41b4-b918-5628f622c7c1 nodeName:}" failed. No retries permitted until 2026-01-21 15:48:14.620954057 +0000 UTC m=+65.288723655 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0a4b6476-7a89-41b4-b918-5628f622c7c1-metrics-certs") pod "network-metrics-daemon-bbr8l" (UID: "0a4b6476-7a89-41b4-b918-5628f622c7c1") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.621886 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.621944 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:47:58 crc kubenswrapper[4760]: E0121 15:47:58.622073 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:47:58 crc kubenswrapper[4760]: E0121 15:47:58.622155 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.672658 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.672689 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.672698 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.672715 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.672732 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:58Z","lastTransitionTime":"2026-01-21T15:47:58Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.774981 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.775022 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.775030 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.775043 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.775053 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:58Z","lastTransitionTime":"2026-01-21T15:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.877672 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.877708 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.877717 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.877731 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.877744 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:58Z","lastTransitionTime":"2026-01-21T15:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.899337 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.899379 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.899391 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.899407 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.899420 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:58Z","lastTransitionTime":"2026-01-21T15:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:58 crc kubenswrapper[4760]: E0121 15:47:58.918244 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d0d32b68-30d1-4a80-9669-b44aebde12c8\\\",\\\"systemUUID\\\":\\\"2d155234-f7e3-4a8d-82a9-efdad3b8958b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.923383 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.923433 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.923443 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.923459 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.923468 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:58Z","lastTransitionTime":"2026-01-21T15:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:58 crc kubenswrapper[4760]: E0121 15:47:58.939024 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d0d32b68-30d1-4a80-9669-b44aebde12c8\\\",\\\"systemUUID\\\":\\\"2d155234-f7e3-4a8d-82a9-efdad3b8958b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.943515 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.943577 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.943594 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.943617 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.943634 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:58Z","lastTransitionTime":"2026-01-21T15:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:58 crc kubenswrapper[4760]: E0121 15:47:58.958811 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d0d32b68-30d1-4a80-9669-b44aebde12c8\\\",\\\"systemUUID\\\":\\\"2d155234-f7e3-4a8d-82a9-efdad3b8958b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.962949 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.963009 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.963025 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.963051 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.963067 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:58Z","lastTransitionTime":"2026-01-21T15:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:58 crc kubenswrapper[4760]: E0121 15:47:58.978925 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d0d32b68-30d1-4a80-9669-b44aebde12c8\\\",\\\"systemUUID\\\":\\\"2d155234-f7e3-4a8d-82a9-efdad3b8958b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.984503 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.984549 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.984561 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.984582 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:58 crc kubenswrapper[4760]: I0121 15:47:58.984595 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:58Z","lastTransitionTime":"2026-01-21T15:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:58 crc kubenswrapper[4760]: E0121 15:47:58.998817 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d0d32b68-30d1-4a80-9669-b44aebde12c8\\\",\\\"systemUUID\\\":\\\"2d155234-f7e3-4a8d-82a9-efdad3b8958b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:58Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:58 crc kubenswrapper[4760]: E0121 15:47:58.998988 4760 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.001004 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.001033 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.001042 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.001056 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.001065 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:59Z","lastTransitionTime":"2026-01-21T15:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.104318 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.104401 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.104414 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.104431 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.104446 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:59Z","lastTransitionTime":"2026-01-21T15:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.120701 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gfprm_aa19ef03-9cda-4ae5-b47c-4a3bac73dc49/ovnkube-controller/2.log" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.121276 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gfprm_aa19ef03-9cda-4ae5-b47c-4a3bac73dc49/ovnkube-controller/1.log" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.125298 4760 generic.go:334] "Generic (PLEG): container finished" podID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerID="8e8c25d44d63ba5e418d4e013aa13b2ecf9c851b4dea690755cc9fde2b81efad" exitCode=1 Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.125370 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" event={"ID":"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49","Type":"ContainerDied","Data":"8e8c25d44d63ba5e418d4e013aa13b2ecf9c851b4dea690755cc9fde2b81efad"} Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.125447 4760 scope.go:117] "RemoveContainer" containerID="173172c0e4dbb726e5d667c377492c729f527438ad7e7228ac7c881022c27056" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.126237 4760 scope.go:117] "RemoveContainer" containerID="8e8c25d44d63ba5e418d4e013aa13b2ecf9c851b4dea690755cc9fde2b81efad" Jan 21 15:47:59 crc kubenswrapper[4760]: E0121 15:47:59.126442 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-gfprm_openshift-ovn-kubernetes(aa19ef03-9cda-4ae5-b47c-4a3bac73dc49)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.142067 4760 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a7fff61ff004ec8235bfddb3a9e0c3a82ac17591b1841d10b366f02833c2fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.164257 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e8c25d44d63ba5e418d4e013aa13b2ecf9c851b4dea690755cc9fde2b81efad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173172c0e4dbb726e5d667c377492c729f527438ad7e7228ac7c881022c27056\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"34] Service openshift-console/downloads retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{downloads openshift-console bbc81ad7-5d87-40bf-82c5-a4db2311cff9 12322 0 2025-02-23 05:39:22 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[] 
map[operator.openshift.io/spec-hash:41d6e4f36bf41ab5be57dec2289f1f8807bbed4b0f642342f213a53bb3ff4d6d] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{0 8080 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: console,component: downloads,},ClusterIP:10.217.4.213,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.213],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0121 15:47:40.952691 6195 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initiali\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e8c25d44d63ba5e418d4e013aa13b2ecf9c851b4dea690755cc9fde2b81efad\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"message\\\":\\\"re:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0121 15:47:57.608847 6401 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network 
policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:57Z is after 2025-08-24T17:21:41Z]\\\\nI0121 15:47:57.608845 6401 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: 
UU\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.184731 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef757d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.197815 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.207286 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.207357 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.207371 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 
15:47:59.207391 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.207402 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:59Z","lastTransitionTime":"2026-01-21T15:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.211432 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.223402 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqxc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad0b627-e961-4ca1-9d20-35844f88fac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33201b5ec0e28f2916354c9c63b13c3cdcb8776cba4df19d81098b28233d5c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-476wz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.237891 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.254374 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.268486 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.279709 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.292031 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.303234 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f129
62a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.309656 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.309709 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.309723 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.309744 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.309755 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:59Z","lastTransitionTime":"2026-01-21T15:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.315041 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3343040fe5b24cd28de48097d45276f8f81ad5ead92c7db4f417015e3d731d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad55a1510883069f146d73179af7340208f024da02e1a3de1ff99a6a0805b9fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-k8rxc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.325670 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bbr8l" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a4b6476-7a89-41b4-b918-5628f622c7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-465m7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-465m7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bbr8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:59 crc 
kubenswrapper[4760]: I0121 15:47:59.326218 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:47:59 crc kubenswrapper[4760]: E0121 15:47:59.326474 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:48:31.326455431 +0000 UTC m=+81.994225019 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.340254 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707
659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.350758 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.362666 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbf7aa4ff7b0ecc831f88bd2dcdd99f141c9fa0316bd2a3cc9eb8a0241f0defc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf1cd
61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.412436 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.412490 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.412499 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.412514 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.412524 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:59Z","lastTransitionTime":"2026-01-21T15:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.427352 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.427413 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.427445 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.427488 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:47:59 crc kubenswrapper[4760]: E0121 15:47:59.427633 4760 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:47:59 crc kubenswrapper[4760]: E0121 15:47:59.427716 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:48:31.427698083 +0000 UTC m=+82.095467661 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:47:59 crc kubenswrapper[4760]: E0121 15:47:59.427641 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:47:59 crc kubenswrapper[4760]: E0121 15:47:59.427783 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:47:59 crc kubenswrapper[4760]: E0121 15:47:59.427797 4760 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:47:59 crc kubenswrapper[4760]: E0121 15:47:59.427866 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-01-21 15:48:31.427842866 +0000 UTC m=+82.095612494 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:47:59 crc kubenswrapper[4760]: E0121 15:47:59.427881 4760 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:47:59 crc kubenswrapper[4760]: E0121 15:47:59.427955 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:48:31.427935908 +0000 UTC m=+82.095705476 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:47:59 crc kubenswrapper[4760]: E0121 15:47:59.428190 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:47:59 crc kubenswrapper[4760]: E0121 15:47:59.428294 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:47:59 crc kubenswrapper[4760]: E0121 15:47:59.428414 4760 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:47:59 crc kubenswrapper[4760]: E0121 15:47:59.428565 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 15:48:31.428542282 +0000 UTC m=+82.096311870 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.515048 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.515091 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.515101 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.515117 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.515128 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:59Z","lastTransitionTime":"2026-01-21T15:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.608560 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 22:15:59.807522339 +0000 UTC Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.617647 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.617687 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.617698 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.617713 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.617725 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:59Z","lastTransitionTime":"2026-01-21T15:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.621999 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.622058 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:47:59 crc kubenswrapper[4760]: E0121 15:47:59.622109 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:47:59 crc kubenswrapper[4760]: E0121 15:47:59.622222 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.647160 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef757d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.660348 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.673062 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.686144 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a7fff61ff004ec8235bfddb3a9e0c3a82ac17591b1841d10b366f02833c2fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.713855 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e8c25d44d63ba5e418d4e013aa13b2ecf9c851b4dea690755cc9fde2b81efad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173172c0e4dbb726e5d667c377492c729f527438ad7e7228ac7c881022c27056\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"34] Service openshift-console/downloads retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{downloads openshift-console bbc81ad7-5d87-40bf-82c5-a4db2311cff9 12322 0 2025-02-23 05:39:22 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[] 
map[operator.openshift.io/spec-hash:41d6e4f36bf41ab5be57dec2289f1f8807bbed4b0f642342f213a53bb3ff4d6d] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{0 8080 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: console,component: downloads,},ClusterIP:10.217.4.213,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.213],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0121 15:47:40.952691 6195 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initiali\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e8c25d44d63ba5e418d4e013aa13b2ecf9c851b4dea690755cc9fde2b81efad\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"message\\\":\\\"re:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0121 15:47:57.608847 6401 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network 
policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:57Z is after 2025-08-24T17:21:41Z]\\\\nI0121 15:47:57.608845 6401 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: 
UU\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.720082 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.720143 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.720156 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.720175 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.720187 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:59Z","lastTransitionTime":"2026-01-21T15:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.726605 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir
\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b8
2799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.740469 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.753122 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.763442 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.775561 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad0b627-e961-4ca1-9d20-35844f88fac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33201b5ec0e28f2916354c9c63b13c3cdcb8776cba4df19d81098b28233d5c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-476wz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.790467 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.802746 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9
db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.818413 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707
659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.822273 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.822316 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.822346 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.822366 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.822377 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:59Z","lastTransitionTime":"2026-01-21T15:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.831010 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.846601 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbf7aa4ff7b0ecc831f88bd2dcdd99f141c9fa0316bd2a3cc9eb8a0241f0defc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf1cd
61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.858139 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3343040fe5b24cd28de48097d45276f8f81ad5ead92c7db4f417015e3d731d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad55a1510883069f146d73179af7340208f024da02e1a3de1ff99a6a0805b9fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-k8rxc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-21T15:47:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.869603 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bbr8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a4b6476-7a89-41b4-b918-5628f622c7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-465m7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-465m7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bbr8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:59Z is after 2025-08-24T17:21:41Z" Jan 21 15:47:59 crc 
kubenswrapper[4760]: I0121 15:47:59.924811 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.924852 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.924863 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.924879 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.924890 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:47:59Z","lastTransitionTime":"2026-01-21T15:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:47:59 crc kubenswrapper[4760]: I0121 15:47:59.990382 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.002781 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bbr8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a4b6476-7a89-41b4-b918-5628f622c7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-465m7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-465m7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bbr8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:00Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:00 crc 
kubenswrapper[4760]: I0121 15:48:00.002824 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.017148 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-p
od-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"im
ageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\
",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:00Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.028047 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.028117 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.028127 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.028144 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.028184 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:00Z","lastTransitionTime":"2026-01-21T15:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.029680 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:00Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.044440 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbf7aa4ff7b0ecc831f88bd2dcdd99f141c9fa0316bd2a3cc9eb8a0241f0defc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf1cd
61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:00Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.055304 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3343040fe5b24cd28de48097d45276f8f81ad5ead92c7db4f417015e3d731d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad55a1510883069f146d73179af7340208f024da02e1a3de1ff99a6a0805b9fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-k8rxc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-21T15:48:00Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.070819 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e8c25d44d63ba5e418d4e013aa13b2ecf9c851b4dea690755cc9fde2b81efad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173172c0e4dbb726e5d667c377492c729f527438ad7e7228ac7c881022c27056\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"34] Service openshift-console/downloads retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{downloads openshift-console bbc81ad7-5d87-40bf-82c5-a4db2311cff9 12322 0 2025-02-23 05:39:22 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[] 
map[operator.openshift.io/spec-hash:41d6e4f36bf41ab5be57dec2289f1f8807bbed4b0f642342f213a53bb3ff4d6d] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{0 8080 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: console,component: downloads,},ClusterIP:10.217.4.213,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.213],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0121 15:47:40.952691 6195 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initiali\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e8c25d44d63ba5e418d4e013aa13b2ecf9c851b4dea690755cc9fde2b81efad\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"message\\\":\\\"re:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0121 15:47:57.608847 6401 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network 
policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:57Z is after 2025-08-24T17:21:41Z]\\\\nI0121 15:47:57.608845 6401 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: 
UU\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:00Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.089387 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef757d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:00Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.102898 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:00Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.120248 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:00Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.133645 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.133685 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.133702 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.133718 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.133730 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:00Z","lastTransitionTime":"2026-01-21T15:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.134436 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gfprm_aa19ef03-9cda-4ae5-b47c-4a3bac73dc49/ovnkube-controller/2.log" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.136089 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a7fff61ff004ec8235bfddb3a9e0c3a82ac17591b1841d10b366f02833c2fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:00Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.150529 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410
c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:00Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.163644 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:00Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.178013 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:00Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.189913 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:00Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.200748 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad0b627-e961-4ca1-9d20-35844f88fac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33201b5ec0e28f2916354c9c63b13c3cdcb8776cba4df19d81098b28233d5c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-476wz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:00Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.215428 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:00Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.228374 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9
db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:00Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.236732 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.236799 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.236810 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 
21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.236825 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.236835 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:00Z","lastTransitionTime":"2026-01-21T15:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.339720 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.339768 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.339843 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.339878 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.339893 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:00Z","lastTransitionTime":"2026-01-21T15:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.442837 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.442880 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.442892 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.442908 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.442920 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:00Z","lastTransitionTime":"2026-01-21T15:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.545672 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.545718 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.545729 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.545745 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.545755 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:00Z","lastTransitionTime":"2026-01-21T15:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.608909 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 02:59:10.837990585 +0000 UTC Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.622354 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:48:00 crc kubenswrapper[4760]: E0121 15:48:00.622503 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.622875 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:48:00 crc kubenswrapper[4760]: E0121 15:48:00.622933 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.648293 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.648383 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.648394 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.648408 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.648417 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:00Z","lastTransitionTime":"2026-01-21T15:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.750477 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.750510 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.750522 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.750535 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.750544 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:00Z","lastTransitionTime":"2026-01-21T15:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.852564 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.852601 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.852609 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.852624 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.852633 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:00Z","lastTransitionTime":"2026-01-21T15:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.955169 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.955224 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.955240 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.955262 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:00 crc kubenswrapper[4760]: I0121 15:48:00.955280 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:00Z","lastTransitionTime":"2026-01-21T15:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.058233 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.058278 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.058290 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.058309 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.058349 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:01Z","lastTransitionTime":"2026-01-21T15:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.160229 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.160260 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.160267 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.160280 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.160288 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:01Z","lastTransitionTime":"2026-01-21T15:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.262483 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.262527 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.262538 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.262554 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.262569 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:01Z","lastTransitionTime":"2026-01-21T15:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.365169 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.365240 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.365257 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.365283 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.365301 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:01Z","lastTransitionTime":"2026-01-21T15:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.468777 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.468828 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.468852 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.468872 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.468889 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:01Z","lastTransitionTime":"2026-01-21T15:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.572176 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.572252 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.572274 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.572306 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.572359 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:01Z","lastTransitionTime":"2026-01-21T15:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.609969 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 00:15:12.424952184 +0000 UTC Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.621647 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.621741 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:48:01 crc kubenswrapper[4760]: E0121 15:48:01.621827 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:48:01 crc kubenswrapper[4760]: E0121 15:48:01.621922 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.674411 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.674459 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.674472 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.674488 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.674502 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:01Z","lastTransitionTime":"2026-01-21T15:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.777680 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.777785 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.777808 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.777834 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.777852 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:01Z","lastTransitionTime":"2026-01-21T15:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.881159 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.881212 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.881234 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.881258 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.881275 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:01Z","lastTransitionTime":"2026-01-21T15:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.984574 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.984623 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.984631 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.984646 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:01 crc kubenswrapper[4760]: I0121 15:48:01.984658 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:01Z","lastTransitionTime":"2026-01-21T15:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.087268 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.087313 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.087365 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.087391 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.087407 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:02Z","lastTransitionTime":"2026-01-21T15:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.189916 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.189987 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.190120 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.190379 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.190408 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:02Z","lastTransitionTime":"2026-01-21T15:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.293224 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.293300 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.293353 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.293387 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.293414 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:02Z","lastTransitionTime":"2026-01-21T15:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.395626 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.395670 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.395680 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.395696 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.395714 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:02Z","lastTransitionTime":"2026-01-21T15:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.499588 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.499652 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.499667 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.499689 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.499704 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:02Z","lastTransitionTime":"2026-01-21T15:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.603567 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.604373 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.604430 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.604459 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.604474 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:02Z","lastTransitionTime":"2026-01-21T15:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.611290 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 02:23:41.393014967 +0000 UTC Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.621792 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.621883 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:48:02 crc kubenswrapper[4760]: E0121 15:48:02.621997 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:48:02 crc kubenswrapper[4760]: E0121 15:48:02.622109 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.707931 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.708026 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.708039 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.708071 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.708086 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:02Z","lastTransitionTime":"2026-01-21T15:48:02Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.811762 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.811847 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.811870 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.811905 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.811929 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:02Z","lastTransitionTime":"2026-01-21T15:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.915103 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.915196 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.915220 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.915251 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:02 crc kubenswrapper[4760]: I0121 15:48:02.915276 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:02Z","lastTransitionTime":"2026-01-21T15:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.018446 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.018546 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.018564 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.018591 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.018612 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:03Z","lastTransitionTime":"2026-01-21T15:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.121974 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.122063 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.122094 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.122127 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.122150 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:03Z","lastTransitionTime":"2026-01-21T15:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.224934 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.224969 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.224978 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.224994 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.225005 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:03Z","lastTransitionTime":"2026-01-21T15:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.328009 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.328099 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.328123 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.328153 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.328174 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:03Z","lastTransitionTime":"2026-01-21T15:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.431672 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.431755 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.431772 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.431798 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.431816 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:03Z","lastTransitionTime":"2026-01-21T15:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.534944 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.535017 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.535042 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.535072 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.535089 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:03Z","lastTransitionTime":"2026-01-21T15:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.611695 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 04:01:48.027352315 +0000 UTC Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.622214 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.622381 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:48:03 crc kubenswrapper[4760]: E0121 15:48:03.622455 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:48:03 crc kubenswrapper[4760]: E0121 15:48:03.622593 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.637760 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.637813 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.637833 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.637856 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.637877 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:03Z","lastTransitionTime":"2026-01-21T15:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.741253 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.741312 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.741357 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.741380 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.741398 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:03Z","lastTransitionTime":"2026-01-21T15:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.844801 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.844872 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.844890 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.844909 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.844922 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:03Z","lastTransitionTime":"2026-01-21T15:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.947766 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.947814 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.947831 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.947860 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:03 crc kubenswrapper[4760]: I0121 15:48:03.947877 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:03Z","lastTransitionTime":"2026-01-21T15:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.050174 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.050225 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.050237 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.050253 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.050262 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:04Z","lastTransitionTime":"2026-01-21T15:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.152779 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.152856 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.152868 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.152892 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.152905 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:04Z","lastTransitionTime":"2026-01-21T15:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.255218 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.255256 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.255270 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.255289 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.255301 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:04Z","lastTransitionTime":"2026-01-21T15:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.359139 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.359218 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.359241 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.359272 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.359293 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:04Z","lastTransitionTime":"2026-01-21T15:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.462700 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.462764 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.462783 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.462809 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.462828 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:04Z","lastTransitionTime":"2026-01-21T15:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.566409 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.566497 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.566519 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.566548 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.566570 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:04Z","lastTransitionTime":"2026-01-21T15:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.612669 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 22:14:56.421221358 +0000 UTC Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.622367 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.622317 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:48:04 crc kubenswrapper[4760]: E0121 15:48:04.622571 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:48:04 crc kubenswrapper[4760]: E0121 15:48:04.622786 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.669538 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.669599 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.669614 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.669634 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.669649 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:04Z","lastTransitionTime":"2026-01-21T15:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.771735 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.771785 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.771801 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.771823 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.771839 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:04Z","lastTransitionTime":"2026-01-21T15:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.874167 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.874239 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.874252 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.874273 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.874286 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:04Z","lastTransitionTime":"2026-01-21T15:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.977284 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.977381 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.977400 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.977426 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:04 crc kubenswrapper[4760]: I0121 15:48:04.977444 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:04Z","lastTransitionTime":"2026-01-21T15:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.080383 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.080463 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.080487 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.080514 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.080533 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:05Z","lastTransitionTime":"2026-01-21T15:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.183547 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.183591 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.183601 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.183618 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.183628 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:05Z","lastTransitionTime":"2026-01-21T15:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.286234 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.286275 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.286284 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.286296 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.286305 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:05Z","lastTransitionTime":"2026-01-21T15:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.389774 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.389822 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.389838 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.389858 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.389872 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:05Z","lastTransitionTime":"2026-01-21T15:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.493274 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.493342 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.493352 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.493370 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.493380 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:05Z","lastTransitionTime":"2026-01-21T15:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.597275 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.597400 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.597427 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.597466 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.597491 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:05Z","lastTransitionTime":"2026-01-21T15:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.613641 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 10:51:05.171831825 +0000 UTC Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.622209 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.622270 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:48:05 crc kubenswrapper[4760]: E0121 15:48:05.622522 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:48:05 crc kubenswrapper[4760]: E0121 15:48:05.622681 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.701070 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.701158 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.701199 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.701234 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.701258 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:05Z","lastTransitionTime":"2026-01-21T15:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.805526 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.805592 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.805610 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.805637 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.805657 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:05Z","lastTransitionTime":"2026-01-21T15:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.909145 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.909209 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.909220 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.909235 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:05 crc kubenswrapper[4760]: I0121 15:48:05.909245 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:05Z","lastTransitionTime":"2026-01-21T15:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.011676 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.011740 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.011754 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.011785 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.011797 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:06Z","lastTransitionTime":"2026-01-21T15:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.115210 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.115248 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.115256 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.115270 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.115282 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:06Z","lastTransitionTime":"2026-01-21T15:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.217623 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.217677 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.217685 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.217703 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.217712 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:06Z","lastTransitionTime":"2026-01-21T15:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.320634 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.320697 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.320720 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.320742 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.320756 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:06Z","lastTransitionTime":"2026-01-21T15:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.423704 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.423759 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.423767 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.423782 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.423791 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:06Z","lastTransitionTime":"2026-01-21T15:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.527028 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.527110 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.527120 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.527140 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.527154 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:06Z","lastTransitionTime":"2026-01-21T15:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.614199 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 18:04:39.121431262 +0000 UTC Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.621465 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.621465 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:48:06 crc kubenswrapper[4760]: E0121 15:48:06.621958 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:48:06 crc kubenswrapper[4760]: E0121 15:48:06.622069 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.629801 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.630086 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.630177 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.630251 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.630354 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:06Z","lastTransitionTime":"2026-01-21T15:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.733914 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.733983 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.733993 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.734013 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.734025 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:06Z","lastTransitionTime":"2026-01-21T15:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.836712 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.836759 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.836771 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.836792 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.836805 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:06Z","lastTransitionTime":"2026-01-21T15:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.939381 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.939429 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.939439 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.939461 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:06 crc kubenswrapper[4760]: I0121 15:48:06.939475 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:06Z","lastTransitionTime":"2026-01-21T15:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.042129 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.042209 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.042235 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.042265 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.042284 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:07Z","lastTransitionTime":"2026-01-21T15:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.145992 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.146040 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.146053 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.146075 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.146089 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:07Z","lastTransitionTime":"2026-01-21T15:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.248616 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.248670 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.248681 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.248700 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.248711 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:07Z","lastTransitionTime":"2026-01-21T15:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.351812 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.351847 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.351856 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.351871 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.351880 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:07Z","lastTransitionTime":"2026-01-21T15:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.454452 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.454513 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.454529 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.454554 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.454572 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:07Z","lastTransitionTime":"2026-01-21T15:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.557656 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.557705 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.557717 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.557736 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.557749 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:07Z","lastTransitionTime":"2026-01-21T15:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.615373 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 08:58:47.80201265 +0000 UTC Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.621930 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.622007 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:48:07 crc kubenswrapper[4760]: E0121 15:48:07.622152 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:48:07 crc kubenswrapper[4760]: E0121 15:48:07.622224 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.660589 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.660643 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.660657 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.660678 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.660691 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:07Z","lastTransitionTime":"2026-01-21T15:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.763197 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.763252 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.763270 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.763293 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.763311 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:07Z","lastTransitionTime":"2026-01-21T15:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.866391 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.866430 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.866442 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.866459 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.866470 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:07Z","lastTransitionTime":"2026-01-21T15:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.969252 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.969298 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.969308 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.969338 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:07 crc kubenswrapper[4760]: I0121 15:48:07.969347 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:07Z","lastTransitionTime":"2026-01-21T15:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.072628 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.072671 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.072684 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.072700 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.072711 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:08Z","lastTransitionTime":"2026-01-21T15:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.174849 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.174905 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.174920 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.174936 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.174947 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:08Z","lastTransitionTime":"2026-01-21T15:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.277178 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.277223 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.277231 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.277246 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.277255 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:08Z","lastTransitionTime":"2026-01-21T15:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.379825 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.379919 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.379951 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.379983 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.380008 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:08Z","lastTransitionTime":"2026-01-21T15:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.482977 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.483027 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.483035 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.483051 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.483069 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:08Z","lastTransitionTime":"2026-01-21T15:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.585937 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.585981 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.585990 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.586003 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.586013 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:08Z","lastTransitionTime":"2026-01-21T15:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.616110 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 10:13:15.884388893 +0000 UTC Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.622481 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.622543 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:48:08 crc kubenswrapper[4760]: E0121 15:48:08.622656 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:48:08 crc kubenswrapper[4760]: E0121 15:48:08.622773 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.688801 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.688859 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.688875 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.688910 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.688944 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:08Z","lastTransitionTime":"2026-01-21T15:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.796806 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.796899 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.796926 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.796960 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.796993 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:08Z","lastTransitionTime":"2026-01-21T15:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.900015 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.900081 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.900099 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.900125 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:08 crc kubenswrapper[4760]: I0121 15:48:08.900148 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:08Z","lastTransitionTime":"2026-01-21T15:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.003204 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.003254 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.003267 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.003284 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.003295 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:09Z","lastTransitionTime":"2026-01-21T15:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.066284 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.066343 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.066354 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.066370 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.066381 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:09Z","lastTransitionTime":"2026-01-21T15:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:09 crc kubenswrapper[4760]: E0121 15:48:09.079142 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d0d32b68-30d1-4a80-9669-b44aebde12c8\\\",\\\"systemUUID\\\":\\\"2d155234-f7e3-4a8d-82a9-efdad3b8958b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:09Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.083650 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.083680 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.083692 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.083713 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.083725 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:09Z","lastTransitionTime":"2026-01-21T15:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:09 crc kubenswrapper[4760]: E0121 15:48:09.097142 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d0d32b68-30d1-4a80-9669-b44aebde12c8\\\",\\\"systemUUID\\\":\\\"2d155234-f7e3-4a8d-82a9-efdad3b8958b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:09Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.101398 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.101458 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.101468 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.101487 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.101499 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:09Z","lastTransitionTime":"2026-01-21T15:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:09 crc kubenswrapper[4760]: E0121 15:48:09.116563 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d0d32b68-30d1-4a80-9669-b44aebde12c8\\\",\\\"systemUUID\\\":\\\"2d155234-f7e3-4a8d-82a9-efdad3b8958b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:09Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.120468 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.120510 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.120524 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.120541 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.120554 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:09Z","lastTransitionTime":"2026-01-21T15:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:09 crc kubenswrapper[4760]: E0121 15:48:09.136850 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d0d32b68-30d1-4a80-9669-b44aebde12c8\\\",\\\"systemUUID\\\":\\\"2d155234-f7e3-4a8d-82a9-efdad3b8958b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:09Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.140801 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.140844 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.140860 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.140878 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.140893 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:09Z","lastTransitionTime":"2026-01-21T15:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:09 crc kubenswrapper[4760]: E0121 15:48:09.156578 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d0d32b68-30d1-4a80-9669-b44aebde12c8\\\",\\\"systemUUID\\\":\\\"2d155234-f7e3-4a8d-82a9-efdad3b8958b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:09Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:09 crc kubenswrapper[4760]: E0121 15:48:09.156746 4760 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.159033 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.159095 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.159110 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.159130 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.159143 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:09Z","lastTransitionTime":"2026-01-21T15:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.262309 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.262377 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.262389 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.262411 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.262425 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:09Z","lastTransitionTime":"2026-01-21T15:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.365072 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.365195 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.365217 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.365242 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.365259 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:09Z","lastTransitionTime":"2026-01-21T15:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.468438 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.468484 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.468493 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.468513 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.468525 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:09Z","lastTransitionTime":"2026-01-21T15:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.571065 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.571114 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.571125 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.571146 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.571159 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:09Z","lastTransitionTime":"2026-01-21T15:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.616415 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 15:49:28.652240692 +0000 UTC Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.622288 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.622368 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:48:09 crc kubenswrapper[4760]: E0121 15:48:09.622448 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:48:09 crc kubenswrapper[4760]: E0121 15:48:09.622786 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.635690 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers 
with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:09Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.646591 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:09Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.658746 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad0b627-e961-4ca1-9d20-35844f88fac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33201b5ec0e28f2916354c9c63b13c3cdcb8776cba4df19d81098b28233d5c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-476wz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:09Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.671666 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:09Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.675089 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.675148 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.675166 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.675190 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.675207 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:09Z","lastTransitionTime":"2026-01-21T15:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.685087 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f
96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:09Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.698595 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:09Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.711247 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9
db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:09Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.725010 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:09Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.740906 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbf7aa4ff7b0ecc831f88bd2dcdd99f141c9fa0316bd2a3cc9eb8a0241f0defc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf1cd
61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:09Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.753601 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3343040fe5b24cd28de48097d45276f8f81ad5ead92c7db4f417015e3d731d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad55a1510883069f146d73179af7340208f024da02e1a3de1ff99a6a0805b9fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-k8rxc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-21T15:48:09Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.766642 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bbr8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a4b6476-7a89-41b4-b918-5628f622c7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-465m7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-465m7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bbr8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:09Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:09 crc 
kubenswrapper[4760]: I0121 15:48:09.777488 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.777526 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.777537 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.777553 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.777567 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:09Z","lastTransitionTime":"2026-01-21T15:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.780939 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:09Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.795129 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0d5028d-4de2-4dda-ae30-dadeaa3eaf46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f330d26a2d002d23a401307cce13e0d816b1e572879693f30a3fa01dc4ad8a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1481b51556e723add302df7aae2c36688f3dcbc33b56907e24963b66bcbb5091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e8c255e683ef07dd2e8d3ce3cc7fe9be2f68f15c32a31d29048e8a12774322e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ab6060c3df944268200f5c7c581bf1923729d9e4bb2f91284e0e67e6937d627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://7ab6060c3df944268200f5c7c581bf1923729d9e4bb2f91284e0e67e6937d627\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:09Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.809630 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:09Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.826206 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:09Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.841036 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a7fff61ff004ec8235bfddb3a9e0c3a82ac17591b1841d10b366f02833c2fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:09Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.860264 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e8c25d44d63ba5e418d4e013aa13b2ecf9c851b4dea690755cc9fde2b81efad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173172c0e4dbb726e5d667c377492c729f527438ad7e7228ac7c881022c27056\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"34] Service openshift-console/downloads retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{downloads openshift-console bbc81ad7-5d87-40bf-82c5-a4db2311cff9 12322 0 2025-02-23 05:39:22 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[] 
map[operator.openshift.io/spec-hash:41d6e4f36bf41ab5be57dec2289f1f8807bbed4b0f642342f213a53bb3ff4d6d] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{0 8080 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: console,component: downloads,},ClusterIP:10.217.4.213,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.213],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0121 15:47:40.952691 6195 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initiali\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e8c25d44d63ba5e418d4e013aa13b2ecf9c851b4dea690755cc9fde2b81efad\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"message\\\":\\\"re:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0121 15:47:57.608847 6401 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network 
policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:57Z is after 2025-08-24T17:21:41Z]\\\\nI0121 15:47:57.608845 6401 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: 
UU\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:09Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.881827 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.881885 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.881896 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.881911 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.881921 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:09Z","lastTransitionTime":"2026-01-21T15:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.883212 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\
",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef757d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb1
8ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:09Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.985060 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.985135 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.985153 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.985213 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:09 crc kubenswrapper[4760]: I0121 15:48:09.985237 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:09Z","lastTransitionTime":"2026-01-21T15:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.088397 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.088442 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.088452 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.088468 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.088479 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:10Z","lastTransitionTime":"2026-01-21T15:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.191127 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.191510 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.191528 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.191554 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.191572 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:10Z","lastTransitionTime":"2026-01-21T15:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.296143 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.296193 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.296207 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.296223 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.296234 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:10Z","lastTransitionTime":"2026-01-21T15:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.399583 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.399636 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.399645 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.399661 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.399673 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:10Z","lastTransitionTime":"2026-01-21T15:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.502600 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.502650 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.502670 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.502690 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.502702 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:10Z","lastTransitionTime":"2026-01-21T15:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.606229 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.606277 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.606288 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.606306 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.606352 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:10Z","lastTransitionTime":"2026-01-21T15:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.618203 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 04:16:51.202105032 +0000 UTC Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.621553 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.621668 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:48:10 crc kubenswrapper[4760]: E0121 15:48:10.621739 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:48:10 crc kubenswrapper[4760]: E0121 15:48:10.621860 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.709627 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.709673 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.709685 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.709705 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.709718 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:10Z","lastTransitionTime":"2026-01-21T15:48:10Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.812133 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.812174 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.812183 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.812198 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.812210 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:10Z","lastTransitionTime":"2026-01-21T15:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.914696 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.914751 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.914761 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.914776 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:10 crc kubenswrapper[4760]: I0121 15:48:10.914787 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:10Z","lastTransitionTime":"2026-01-21T15:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.018781 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.018857 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.018876 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.018903 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.018921 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:11Z","lastTransitionTime":"2026-01-21T15:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.121028 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.121093 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.121108 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.121132 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.121149 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:11Z","lastTransitionTime":"2026-01-21T15:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.224037 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.224088 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.224099 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.224117 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.224129 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:11Z","lastTransitionTime":"2026-01-21T15:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.327437 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.327499 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.327520 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.327545 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.327569 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:11Z","lastTransitionTime":"2026-01-21T15:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.430054 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.430098 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.430114 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.430136 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.430149 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:11Z","lastTransitionTime":"2026-01-21T15:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.532632 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.532684 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.532699 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.532721 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.532736 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:11Z","lastTransitionTime":"2026-01-21T15:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.618740 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 17:58:00.534059008 +0000 UTC Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.622217 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.622397 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:48:11 crc kubenswrapper[4760]: E0121 15:48:11.622443 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:48:11 crc kubenswrapper[4760]: E0121 15:48:11.622632 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.635674 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.635740 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.635757 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.635781 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.635798 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:11Z","lastTransitionTime":"2026-01-21T15:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.738572 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.738608 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.738620 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.738635 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.738647 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:11Z","lastTransitionTime":"2026-01-21T15:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.842259 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.842379 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.842402 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.842430 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.842452 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:11Z","lastTransitionTime":"2026-01-21T15:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.945073 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.945140 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.945157 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.945182 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:11 crc kubenswrapper[4760]: I0121 15:48:11.945200 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:11Z","lastTransitionTime":"2026-01-21T15:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.048307 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.048391 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.048410 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.048434 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.048451 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:12Z","lastTransitionTime":"2026-01-21T15:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.151889 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.151951 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.151976 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.152002 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.152018 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:12Z","lastTransitionTime":"2026-01-21T15:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.255795 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.255867 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.255881 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.255905 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.255924 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:12Z","lastTransitionTime":"2026-01-21T15:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.358452 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.358511 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.358524 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.358546 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.358559 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:12Z","lastTransitionTime":"2026-01-21T15:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.461711 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.461788 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.461806 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.461834 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.461852 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:12Z","lastTransitionTime":"2026-01-21T15:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.576656 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.576709 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.576724 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.576784 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.576811 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:12Z","lastTransitionTime":"2026-01-21T15:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.618875 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 13:09:28.75069069 +0000 UTC Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.622373 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.622403 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:48:12 crc kubenswrapper[4760]: E0121 15:48:12.622572 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:48:12 crc kubenswrapper[4760]: E0121 15:48:12.622717 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.623747 4760 scope.go:117] "RemoveContainer" containerID="8e8c25d44d63ba5e418d4e013aa13b2ecf9c851b4dea690755cc9fde2b81efad" Jan 21 15:48:12 crc kubenswrapper[4760]: E0121 15:48:12.624081 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-gfprm_openshift-ovn-kubernetes(aa19ef03-9cda-4ae5-b47c-4a3bac73dc49)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.637550 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.649412 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9
db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.662844 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.680572 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.680614 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.680626 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.680644 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.680656 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:12Z","lastTransitionTime":"2026-01-21T15:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.680629 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbf7aa4ff7b0ecc831f88bd2dcdd99f141c9fa0316bd2a3cc9eb8a0241f0defc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.692835 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3343040fe5b24cd28de48097d45276f8f81ad5ead92c7db4f417015e3d731d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c427
45f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad55a1510883069f146d73179af7340208f024da02e1a3de1ff99a6a0805b9fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:41Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-k8rxc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.703366 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bbr8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a4b6476-7a89-41b4-b918-5628f622c7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-465m7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-465m7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bbr8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:12 crc 
kubenswrapper[4760]: I0121 15:48:12.719562 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558f
c4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 
15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.731577 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0d5028d-4de2-4dda-ae30-dadeaa3eaf46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f330d26a2d002d23a401307cce13e0d816b1e572879693f30a3fa01dc4ad8a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1481b51556e723add302df7aae2c36688f3dcbc33b56907e24963b66bcbb5091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e8c255e683ef07dd2e8d3ce3cc7fe9be2f68f15c32a31d29048e8a12774322e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ab6060c3df944268200f5c7c581bf1923729d9e4bb2f91284e0e67e6937d627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://7ab6060c3df944268200f5c7c581bf1923729d9e4bb2f91284e0e67e6937d627\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.746873 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.761975 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.773416 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a7fff61ff004ec8235bfddb3a9e0c3a82ac17591b1841d10b366f02833c2fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.783373 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.783404 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.783416 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.783433 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.783448 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:12Z","lastTransitionTime":"2026-01-21T15:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.793340 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e8c25d44d63ba5e418d4e013aa13b2ecf9c851b4dea690755cc9fde2b81efad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e8c25d44d63ba5e418d4e013aa13b2ecf9c851b4dea690755cc9fde2b81efad\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"message\\\":\\\"re:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0121 15:47:57.608847 6401 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create 
admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:57Z is after 2025-08-24T17:21:41Z]\\\\nI0121 15:47:57.608845 6401 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UU\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gfprm_openshift-ovn-kubernetes(aa19ef03-9cda-4ae5-b47c-4a3bac73dc49)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5
261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.817054 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef757d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.834593 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.847440 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.859461 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad0b627-e961-4ca1-9d20-35844f88fac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33201b5ec0e28f2916354c9c63b13c3cdcb8776cba4df19d81098b28233d5c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-476wz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.872073 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.884648 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:12Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.885721 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.885754 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.885763 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.885780 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.885791 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:12Z","lastTransitionTime":"2026-01-21T15:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.988210 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.988270 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.988281 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.988304 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:12 crc kubenswrapper[4760]: I0121 15:48:12.988317 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:12Z","lastTransitionTime":"2026-01-21T15:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.090981 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.091042 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.091065 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.091095 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.091115 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:13Z","lastTransitionTime":"2026-01-21T15:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.192913 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.192963 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.192974 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.192996 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.193009 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:13Z","lastTransitionTime":"2026-01-21T15:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.294683 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.294724 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.294732 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.294748 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.294760 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:13Z","lastTransitionTime":"2026-01-21T15:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.396920 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.396977 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.396987 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.397007 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.397018 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:13Z","lastTransitionTime":"2026-01-21T15:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.499607 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.499693 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.499712 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.499737 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.499754 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:13Z","lastTransitionTime":"2026-01-21T15:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.601865 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.601923 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.601936 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.601957 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.601967 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:13Z","lastTransitionTime":"2026-01-21T15:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.619634 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 04:42:13.911003414 +0000 UTC Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.622015 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.622135 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:48:13 crc kubenswrapper[4760]: E0121 15:48:13.622163 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:48:13 crc kubenswrapper[4760]: E0121 15:48:13.622464 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.704942 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.704983 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.704992 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.705008 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.705018 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:13Z","lastTransitionTime":"2026-01-21T15:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.807795 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.807831 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.807839 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.807855 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.807866 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:13Z","lastTransitionTime":"2026-01-21T15:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.910245 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.910287 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.910300 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.910318 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:13 crc kubenswrapper[4760]: I0121 15:48:13.910343 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:13Z","lastTransitionTime":"2026-01-21T15:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.012984 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.013030 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.013042 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.013060 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.013073 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:14Z","lastTransitionTime":"2026-01-21T15:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.116507 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.116571 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.116586 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.116605 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.116615 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:14Z","lastTransitionTime":"2026-01-21T15:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.218884 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.218933 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.218943 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.218961 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.218972 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:14Z","lastTransitionTime":"2026-01-21T15:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.321718 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.321786 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.321800 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.321824 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.321840 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:14Z","lastTransitionTime":"2026-01-21T15:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.424547 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.424596 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.424604 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.424621 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.424630 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:14Z","lastTransitionTime":"2026-01-21T15:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.527152 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.527198 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.527209 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.527225 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.527237 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:14Z","lastTransitionTime":"2026-01-21T15:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.620578 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 07:11:28.226509551 +0000 UTC Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.621836 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.621892 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:48:14 crc kubenswrapper[4760]: E0121 15:48:14.621997 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:48:14 crc kubenswrapper[4760]: E0121 15:48:14.622133 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.629929 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.629963 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.629972 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.629986 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.629996 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:14Z","lastTransitionTime":"2026-01-21T15:48:14Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.692275 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0a4b6476-7a89-41b4-b918-5628f622c7c1-metrics-certs\") pod \"network-metrics-daemon-bbr8l\" (UID: \"0a4b6476-7a89-41b4-b918-5628f622c7c1\") " pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:48:14 crc kubenswrapper[4760]: E0121 15:48:14.692562 4760 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:48:14 crc kubenswrapper[4760]: E0121 15:48:14.692689 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0a4b6476-7a89-41b4-b918-5628f622c7c1-metrics-certs podName:0a4b6476-7a89-41b4-b918-5628f622c7c1 nodeName:}" failed. No retries permitted until 2026-01-21 15:48:46.692661046 +0000 UTC m=+97.360430714 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0a4b6476-7a89-41b4-b918-5628f622c7c1-metrics-certs") pod "network-metrics-daemon-bbr8l" (UID: "0a4b6476-7a89-41b4-b918-5628f622c7c1") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.733137 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.733195 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.733205 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.733224 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.733239 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:14Z","lastTransitionTime":"2026-01-21T15:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.836361 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.836417 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.836429 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.836449 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.836462 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:14Z","lastTransitionTime":"2026-01-21T15:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.938310 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.938400 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.938412 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.938431 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:14 crc kubenswrapper[4760]: I0121 15:48:14.938445 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:14Z","lastTransitionTime":"2026-01-21T15:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.040472 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.040520 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.040531 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.040549 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.040565 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:15Z","lastTransitionTime":"2026-01-21T15:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.144094 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.144152 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.144163 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.144185 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.144203 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:15Z","lastTransitionTime":"2026-01-21T15:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.198893 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dx99k_7300c51f-415f-4696-bda1-a9e79ae5704a/kube-multus/0.log" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.198972 4760 generic.go:334] "Generic (PLEG): container finished" podID="7300c51f-415f-4696-bda1-a9e79ae5704a" containerID="55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6" exitCode=1 Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.199014 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dx99k" event={"ID":"7300c51f-415f-4696-bda1-a9e79ae5704a","Type":"ContainerDied","Data":"55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6"} Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.199486 4760 scope.go:117] "RemoveContainer" containerID="55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.213696 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:48:14Z\\\",\\\"message\\\":\\\"2026-01-21T15:47:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_205bda0a-d38e-417c-9402-dd4c695a8fbd\\\\n2026-01-21T15:47:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_205bda0a-d38e-417c-9402-dd4c695a8fbd to /host/opt/cni/bin/\\\\n2026-01-21T15:47:29Z [verbose] multus-daemon started\\\\n2026-01-21T15:47:29Z [verbose] Readiness Indicator file check\\\\n2026-01-21T15:48:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:15Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.226360 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name
\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-01-21T15:48:15Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.240212 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbf7aa4ff7b0ecc831f88bd2dcdd99f141c9fa0316bd2a3cc9eb8a0241f0defc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"
state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:15Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.248548 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.248583 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.248592 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.248607 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.248618 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:15Z","lastTransitionTime":"2026-01-21T15:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.250846 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3343040fe5b24cd28de48097d45276f8f81ad5ead92c7db4f417015e3d731d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad55a1510883069f146d73179af7340208f024da02e1a3de1ff99a6a0805b9fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-k8rxc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:15Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.261421 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bbr8l" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a4b6476-7a89-41b4-b918-5628f622c7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-465m7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-465m7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bbr8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:15Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:15 crc 
kubenswrapper[4760]: I0121 15:48:15.275562 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558f
c4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 
15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:15Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.287383 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0d5028d-4de2-4dda-ae30-dadeaa3eaf46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f330d26a2d002d23a401307cce13e0d816b1e572879693f30a3fa01dc4ad8a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1481b51556e723add302df7aae2c36688f3dcbc33b56907e24963b66bcbb5091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e8c255e683ef07dd2e8d3ce3cc7fe9be2f68f15c32a31d29048e8a12774322e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ab6060c3df944268200f5c7c581bf1923729d9e4bb2f91284e0e67e6937d627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://7ab6060c3df944268200f5c7c581bf1923729d9e4bb2f91284e0e67e6937d627\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:15Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.297242 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:15Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.307589 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:15Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.316626 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a7fff61ff004ec8235bfddb3a9e0c3a82ac17591b1841d10b366f02833c2fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:15Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.333058 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e8c25d44d63ba5e418d4e013aa13b2ecf9c851b4dea690755cc9fde2b81efad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e8c25d44d63ba5e418d4e013aa13b2ecf9c851b4dea690755cc9fde2b81efad\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"message\\\":\\\"re:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0121 15:47:57.608847 6401 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create 
admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:57Z is after 2025-08-24T17:21:41Z]\\\\nI0121 15:47:57.608845 6401 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UU\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gfprm_openshift-ovn-kubernetes(aa19ef03-9cda-4ae5-b47c-4a3bac73dc49)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5
261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:15Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.350850 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef757d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:15Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.351065 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.351092 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.351101 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.351116 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.351126 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:15Z","lastTransitionTime":"2026-01-21T15:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.364488 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:15Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.372992 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:15Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.381198 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad0b627-e961-4ca1-9d20-35844f88fac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33201b5ec0e28f2916354c9c63b13c3cdcb8776cba4df19d81098b28233d5c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-476wz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:15Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.393167 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:15Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.405691 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:15Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.425498 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:15Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.453053 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 
15:48:15.453090 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.453099 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.453116 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.453129 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:15Z","lastTransitionTime":"2026-01-21T15:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.555423 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.555718 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.555727 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.555741 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.555752 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:15Z","lastTransitionTime":"2026-01-21T15:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.621029 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 04:11:58.539772226 +0000 UTC Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.622314 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.622378 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:48:15 crc kubenswrapper[4760]: E0121 15:48:15.622470 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:48:15 crc kubenswrapper[4760]: E0121 15:48:15.622606 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.658128 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.658170 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.658181 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.658198 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.658207 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:15Z","lastTransitionTime":"2026-01-21T15:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.760439 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.760476 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.760505 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.760521 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.760530 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:15Z","lastTransitionTime":"2026-01-21T15:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.862704 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.862758 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.862780 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.862804 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.862823 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:15Z","lastTransitionTime":"2026-01-21T15:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.965214 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.965267 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.965284 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.965310 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:15 crc kubenswrapper[4760]: I0121 15:48:15.965349 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:15Z","lastTransitionTime":"2026-01-21T15:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.068109 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.068139 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.068148 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.068162 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.068170 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:16Z","lastTransitionTime":"2026-01-21T15:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.170362 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.170426 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.170438 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.170454 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.170463 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:16Z","lastTransitionTime":"2026-01-21T15:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.203798 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dx99k_7300c51f-415f-4696-bda1-a9e79ae5704a/kube-multus/0.log" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.203867 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dx99k" event={"ID":"7300c51f-415f-4696-bda1-a9e79ae5704a","Type":"ContainerStarted","Data":"293f11d27cd6f37ed1446eb9d03303cd0d18c5e0c23fb8fce2818caaaab93cc5"} Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.218484 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/
static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:16Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.230908 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:16Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.245168 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:16Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.254949 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:16Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.265072 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad0b627-e961-4ca1-9d20-35844f88fac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33201b5ec0e28f2916354c9c63b13c3cdcb8776cba4df19d81098b28233d5c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-476wz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:16Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.272880 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.272941 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.272953 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.272969 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.272981 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:16Z","lastTransitionTime":"2026-01-21T15:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.280305 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://293f11d27cd6f37ed1446eb9d03303cd0d18c5e0c23fb8fce2818caaaab93cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5560
1d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:48:14Z\\\",\\\"message\\\":\\\"2026-01-21T15:47:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_205bda0a-d38e-417c-9402-dd4c695a8fbd\\\\n2026-01-21T15:47:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_205bda0a-d38e-417c-9402-dd4c695a8fbd to /host/opt/cni/bin/\\\\n2026-01-21T15:47:29Z [verbose] multus-daemon started\\\\n2026-01-21T15:47:29Z [verbose] Readiness Indicator file check\\\\n2026-01-21T15:48:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/k
ubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:16Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.293111 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9db374715a149de88bf244ba92b6c
1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:16Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.307128 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707
659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:16Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.319997 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0d5028d-4de2-4dda-ae30-dadeaa3eaf46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f330d26a2d002d23a401307cce13e0d816b1e572879693f30a3fa01dc4ad8a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1481b51556e723add302df7aae2c36688f3dcbc33b56907e24963b66bcbb5091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e8c255e683ef07dd2e8d3ce3cc7fe9be2f68f15c32a31d29048e8a12774322e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ab6060c3df944268200f5c7c581bf1923729d9e4bb2f91284e0e67e6937d627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://7ab6060c3df944268200f5c7c581bf1923729d9e4bb2f91284e0e67e6937d627\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:16Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.333367 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:16Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.348749 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbf7aa4ff7b0ecc831f88bd2dcdd99f141c9fa0316bd2a3cc9eb8a0241f0defc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf1cd
61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:16Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.361502 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3343040fe5b24cd28de48097d45276f8f81ad5ead92c7db4f417015e3d731d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad55a1510883069f146d73179af7340208f024da02e1a3de1ff99a6a0805b9fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-k8rxc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-21T15:48:16Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.373369 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bbr8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a4b6476-7a89-41b4-b918-5628f622c7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-465m7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-465m7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bbr8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:16Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:16 crc 
kubenswrapper[4760]: I0121 15:48:16.374974 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.375007 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.375016 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.375031 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.375042 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:16Z","lastTransitionTime":"2026-01-21T15:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.392024 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef757d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:16Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.404771 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:16Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.418413 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:16Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.430120 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a7fff61ff004ec8235bfddb3a9e0c3a82ac17591b1841d10b366f02833c2fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:16Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.450421 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e8c25d44d63ba5e418d4e013aa13b2ecf9c851b4dea690755cc9fde2b81efad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e8c25d44d63ba5e418d4e013aa13b2ecf9c851b4dea690755cc9fde2b81efad\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"message\\\":\\\"re:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0121 15:47:57.608847 6401 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create 
admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:57Z is after 2025-08-24T17:21:41Z]\\\\nI0121 15:47:57.608845 6401 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UU\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gfprm_openshift-ovn-kubernetes(aa19ef03-9cda-4ae5-b47c-4a3bac73dc49)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5
261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:16Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.477107 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.477152 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.477164 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.477182 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.477195 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:16Z","lastTransitionTime":"2026-01-21T15:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.580367 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.580409 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.580422 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.580441 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.580452 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:16Z","lastTransitionTime":"2026-01-21T15:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.621582 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 17:27:53.037952214 +0000 UTC Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.621746 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.621827 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:48:16 crc kubenswrapper[4760]: E0121 15:48:16.621866 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:48:16 crc kubenswrapper[4760]: E0121 15:48:16.622025 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.682781 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.682822 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.682833 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.682849 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.682862 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:16Z","lastTransitionTime":"2026-01-21T15:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.785360 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.785399 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.785410 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.785431 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.785443 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:16Z","lastTransitionTime":"2026-01-21T15:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.887934 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.887984 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.887997 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.888018 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.888035 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:16Z","lastTransitionTime":"2026-01-21T15:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.991356 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.991403 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.991420 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.991438 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:16 crc kubenswrapper[4760]: I0121 15:48:16.991450 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:16Z","lastTransitionTime":"2026-01-21T15:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.094008 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.094051 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.094066 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.094083 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.094096 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:17Z","lastTransitionTime":"2026-01-21T15:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.197146 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.197194 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.197203 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.197219 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.197231 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:17Z","lastTransitionTime":"2026-01-21T15:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.300408 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.300449 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.300458 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.300476 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.300488 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:17Z","lastTransitionTime":"2026-01-21T15:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.402862 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.402893 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.402901 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.402916 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.402925 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:17Z","lastTransitionTime":"2026-01-21T15:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.505616 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.505653 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.505663 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.505681 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.505694 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:17Z","lastTransitionTime":"2026-01-21T15:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.608692 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.608726 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.608738 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.608755 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.608771 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:17Z","lastTransitionTime":"2026-01-21T15:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.621778 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 04:25:37.332451514 +0000 UTC Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.622161 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:48:17 crc kubenswrapper[4760]: E0121 15:48:17.622370 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.622765 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:48:17 crc kubenswrapper[4760]: E0121 15:48:17.622975 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.711277 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.711311 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.711334 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.711352 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.711363 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:17Z","lastTransitionTime":"2026-01-21T15:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.813416 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.813460 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.813470 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.813499 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.813519 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:17Z","lastTransitionTime":"2026-01-21T15:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.916084 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.916133 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.916144 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.916167 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:17 crc kubenswrapper[4760]: I0121 15:48:17.916182 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:17Z","lastTransitionTime":"2026-01-21T15:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:18 crc kubenswrapper[4760]: I0121 15:48:18.018432 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:18 crc kubenswrapper[4760]: I0121 15:48:18.018471 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:18 crc kubenswrapper[4760]: I0121 15:48:18.018482 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:18 crc kubenswrapper[4760]: I0121 15:48:18.018499 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:18 crc kubenswrapper[4760]: I0121 15:48:18.018528 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:18Z","lastTransitionTime":"2026-01-21T15:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:18 crc kubenswrapper[4760]: I0121 15:48:18.121197 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:18 crc kubenswrapper[4760]: I0121 15:48:18.121261 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:18 crc kubenswrapper[4760]: I0121 15:48:18.121274 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:18 crc kubenswrapper[4760]: I0121 15:48:18.121296 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:18 crc kubenswrapper[4760]: I0121 15:48:18.121305 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:18Z","lastTransitionTime":"2026-01-21T15:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:18 crc kubenswrapper[4760]: I0121 15:48:18.223843 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:18 crc kubenswrapper[4760]: I0121 15:48:18.223902 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:18 crc kubenswrapper[4760]: I0121 15:48:18.223918 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:18 crc kubenswrapper[4760]: I0121 15:48:18.223940 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:18 crc kubenswrapper[4760]: I0121 15:48:18.223957 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:18Z","lastTransitionTime":"2026-01-21T15:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:18 crc kubenswrapper[4760]: I0121 15:48:18.326110 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:18 crc kubenswrapper[4760]: I0121 15:48:18.326151 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:18 crc kubenswrapper[4760]: I0121 15:48:18.326163 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:18 crc kubenswrapper[4760]: I0121 15:48:18.326181 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:18 crc kubenswrapper[4760]: I0121 15:48:18.326192 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:18Z","lastTransitionTime":"2026-01-21T15:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:18 crc kubenswrapper[4760]: I0121 15:48:18.428377 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:18 crc kubenswrapper[4760]: I0121 15:48:18.428420 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:18 crc kubenswrapper[4760]: I0121 15:48:18.428434 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:18 crc kubenswrapper[4760]: I0121 15:48:18.428450 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:18 crc kubenswrapper[4760]: I0121 15:48:18.428459 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:18Z","lastTransitionTime":"2026-01-21T15:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:18 crc kubenswrapper[4760]: I0121 15:48:18.531521 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:18 crc kubenswrapper[4760]: I0121 15:48:18.531561 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:18 crc kubenswrapper[4760]: I0121 15:48:18.531571 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:18 crc kubenswrapper[4760]: I0121 15:48:18.531586 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:18 crc kubenswrapper[4760]: I0121 15:48:18.531600 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:18Z","lastTransitionTime":"2026-01-21T15:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:18 crc kubenswrapper[4760]: I0121 15:48:18.622265 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 00:41:19.864116864 +0000 UTC Jan 21 15:48:18 crc kubenswrapper[4760]: I0121 15:48:18.622367 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:48:18 crc kubenswrapper[4760]: I0121 15:48:18.622419 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:48:18 crc kubenswrapper[4760]: E0121 15:48:18.622488 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:48:18 crc kubenswrapper[4760]: E0121 15:48:18.622668 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:48:18 crc kubenswrapper[4760]: I0121 15:48:18.634442 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:18 crc kubenswrapper[4760]: I0121 15:48:18.634492 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:18 crc kubenswrapper[4760]: I0121 15:48:18.634510 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:18 crc kubenswrapper[4760]: I0121 15:48:18.634532 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:18 crc kubenswrapper[4760]: I0121 15:48:18.634551 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:18Z","lastTransitionTime":"2026-01-21T15:48:18Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:18 crc kubenswrapper[4760]: I0121 15:48:18.736987 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:18 crc kubenswrapper[4760]: I0121 15:48:18.737033 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:18 crc kubenswrapper[4760]: I0121 15:48:18.737045 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:18 crc kubenswrapper[4760]: I0121 15:48:18.737061 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:18 crc kubenswrapper[4760]: I0121 15:48:18.737073 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:18Z","lastTransitionTime":"2026-01-21T15:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:18 crc kubenswrapper[4760]: I0121 15:48:18.839102 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:18 crc kubenswrapper[4760]: I0121 15:48:18.839144 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:18 crc kubenswrapper[4760]: I0121 15:48:18.839152 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:18 crc kubenswrapper[4760]: I0121 15:48:18.839169 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:18 crc kubenswrapper[4760]: I0121 15:48:18.839180 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:18Z","lastTransitionTime":"2026-01-21T15:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:18 crc kubenswrapper[4760]: I0121 15:48:18.941878 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:18 crc kubenswrapper[4760]: I0121 15:48:18.941922 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:18 crc kubenswrapper[4760]: I0121 15:48:18.941933 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:18 crc kubenswrapper[4760]: I0121 15:48:18.941969 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:18 crc kubenswrapper[4760]: I0121 15:48:18.941980 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:18Z","lastTransitionTime":"2026-01-21T15:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.045154 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.045203 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.045216 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.045241 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.045254 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:19Z","lastTransitionTime":"2026-01-21T15:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.148296 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.148363 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.148380 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.148401 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.148413 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:19Z","lastTransitionTime":"2026-01-21T15:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.252670 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.252714 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.252724 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.252741 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.252753 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:19Z","lastTransitionTime":"2026-01-21T15:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.355675 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.355748 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.355762 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.355784 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.355797 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:19Z","lastTransitionTime":"2026-01-21T15:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.373607 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.373661 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.373676 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.373696 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.373710 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:19Z","lastTransitionTime":"2026-01-21T15:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:19 crc kubenswrapper[4760]: E0121 15:48:19.387505 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d0d32b68-30d1-4a80-9669-b44aebde12c8\\\",\\\"systemUUID\\\":\\\"2d155234-f7e3-4a8d-82a9-efdad3b8958b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.392742 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.392812 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.392828 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.392853 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.392870 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:19Z","lastTransitionTime":"2026-01-21T15:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:19 crc kubenswrapper[4760]: E0121 15:48:19.411281 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d0d32b68-30d1-4a80-9669-b44aebde12c8\\\",\\\"systemUUID\\\":\\\"2d155234-f7e3-4a8d-82a9-efdad3b8958b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.416011 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.416077 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.416093 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.416118 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.416133 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:19Z","lastTransitionTime":"2026-01-21T15:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:19 crc kubenswrapper[4760]: E0121 15:48:19.430170 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d0d32b68-30d1-4a80-9669-b44aebde12c8\\\",\\\"systemUUID\\\":\\\"2d155234-f7e3-4a8d-82a9-efdad3b8958b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.434988 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.435045 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.435096 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.435121 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.435138 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:19Z","lastTransitionTime":"2026-01-21T15:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:19 crc kubenswrapper[4760]: E0121 15:48:19.450464 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d0d32b68-30d1-4a80-9669-b44aebde12c8\\\",\\\"systemUUID\\\":\\\"2d155234-f7e3-4a8d-82a9-efdad3b8958b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.454924 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.454965 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.454975 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.454994 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.455003 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:19Z","lastTransitionTime":"2026-01-21T15:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:19 crc kubenswrapper[4760]: E0121 15:48:19.470163 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d0d32b68-30d1-4a80-9669-b44aebde12c8\\\",\\\"systemUUID\\\":\\\"2d155234-f7e3-4a8d-82a9-efdad3b8958b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:19 crc kubenswrapper[4760]: E0121 15:48:19.470288 4760 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.471933 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.471954 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.471962 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.471976 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.471985 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:19Z","lastTransitionTime":"2026-01-21T15:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.575605 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.575727 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.575809 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.575951 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.576005 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:19Z","lastTransitionTime":"2026-01-21T15:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.622401 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 10:33:22.653172971 +0000 UTC Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.622453 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.622517 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:48:19 crc kubenswrapper[4760]: E0121 15:48:19.622643 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:48:19 crc kubenswrapper[4760]: E0121 15:48:19.622820 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.635397 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqxc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad0b627-e961-4ca1-9d20-35844f88fac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33201b5ec0e28f2916354c9c63b13c3cdcb8776cba4df19d81098b28233d5c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-476wz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.648093 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.660385 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.670978 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.678527 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.678862 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.678891 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.678900 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.678914 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.678923 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:19Z","lastTransitionTime":"2026-01-21T15:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.690270 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://293f11d27cd6f37ed1446eb9d03303cd0d18c5e0c23fb8fce2818caaaab93cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:48:14Z\\\",\\\"message\\\":\\\"2026-01-21T15:47:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_205bda0a-d38e-417c-9402-dd4c695a8fbd\\\\n2026-01-21T15:47:29+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_205bda0a-d38e-417c-9402-dd4c695a8fbd to /host/opt/cni/bin/\\\\n2026-01-21T15:47:29Z [verbose] multus-daemon started\\\\n2026-01-21T15:47:29Z [verbose] Readiness Indicator file check\\\\n2026-01-21T15:48:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.699477 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9db374715a149de88bf244ba92b6c
1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.712440 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3343040fe5b24cd28de48097d45276f8f81ad5ead92c7db4f417015e3d731d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad55a1510883069f146d73179af7340208f02
4da02e1a3de1ff99a6a0805b9fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-k8rxc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.725084 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bbr8l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a4b6476-7a89-41b4-b918-5628f622c7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-465m7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-465m7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bbr8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:19 crc 
kubenswrapper[4760]: I0121 15:48:19.739788 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558f
c4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 
15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.754250 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0d5028d-4de2-4dda-ae30-dadeaa3eaf46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f330d26a2d002d23a401307cce13e0d816b1e572879693f30a3fa01dc4ad8a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1481b51556e723add302df7aae2c36688f3dcbc33b56907e24963b66bcbb5091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e8c255e683ef07dd2e8d3ce3cc7fe9be2f68f15c32a31d29048e8a12774322e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ab6060c3df944268200f5c7c581bf1923729d9e4bb2f91284e0e67e6937d627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://7ab6060c3df944268200f5c7c581bf1923729d9e4bb2f91284e0e67e6937d627\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.767217 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.782490 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.782533 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.782542 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:19 crc 
kubenswrapper[4760]: I0121 15:48:19.782564 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.782583 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:19Z","lastTransitionTime":"2026-01-21T15:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.783712 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbf7aa4ff7b0ecc831f88bd2dcdd99f141c9fa0316bd2a3cc9eb8a0241f0defc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f7
1916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.796944 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a7fff61ff004ec8235bfddb3a9e0c3a82ac17591b1841d10b366f02833c2fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T15:48:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.817243 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e8c25d44d63ba5e418d4e013aa13b2ecf9c851b4dea690755cc9fde2b81efad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e8c25d44d63ba5e418d4e013aa13b2ecf9c851b4dea690755cc9fde2b81efad\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"message\\\":\\\"re:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0121 15:47:57.608847 6401 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create 
admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:57Z is after 2025-08-24T17:21:41Z]\\\\nI0121 15:47:57.608845 6401 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UU\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gfprm_openshift-ovn-kubernetes(aa19ef03-9cda-4ae5-b47c-4a3bac73dc49)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5
261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.836778 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef757d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.851646 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.866891 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:19Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.884540 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.884584 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.884597 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.884616 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.884629 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:19Z","lastTransitionTime":"2026-01-21T15:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.987234 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.987273 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.987283 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.987301 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:19 crc kubenswrapper[4760]: I0121 15:48:19.987313 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:19Z","lastTransitionTime":"2026-01-21T15:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.089816 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.089862 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.089872 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.089889 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.089899 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:20Z","lastTransitionTime":"2026-01-21T15:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.193584 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.193917 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.194000 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.194104 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.194185 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:20Z","lastTransitionTime":"2026-01-21T15:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.296963 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.297019 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.297029 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.297051 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.297063 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:20Z","lastTransitionTime":"2026-01-21T15:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.399199 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.399247 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.399260 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.399277 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.399289 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:20Z","lastTransitionTime":"2026-01-21T15:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.501963 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.502027 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.502041 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.502061 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.502077 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:20Z","lastTransitionTime":"2026-01-21T15:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.604594 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.604631 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.604640 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.604659 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.604693 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:20Z","lastTransitionTime":"2026-01-21T15:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.621464 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:48:20 crc kubenswrapper[4760]: E0121 15:48:20.621587 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.621676 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:48:20 crc kubenswrapper[4760]: E0121 15:48:20.621889 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.623437 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 15:51:19.166660933 +0000 UTC Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.707395 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.707446 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.707455 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.707473 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.707482 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:20Z","lastTransitionTime":"2026-01-21T15:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.810376 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.810425 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.810437 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.810457 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.810472 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:20Z","lastTransitionTime":"2026-01-21T15:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.913626 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.913869 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.913981 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.914067 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:20 crc kubenswrapper[4760]: I0121 15:48:20.914234 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:20Z","lastTransitionTime":"2026-01-21T15:48:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.016816 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.016867 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.016875 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.016891 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.016901 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:21Z","lastTransitionTime":"2026-01-21T15:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.118804 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.118853 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.118862 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.118879 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.118887 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:21Z","lastTransitionTime":"2026-01-21T15:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.220423 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.220457 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.220468 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.220483 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.220492 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:21Z","lastTransitionTime":"2026-01-21T15:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.322874 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.322919 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.322930 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.322946 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.322954 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:21Z","lastTransitionTime":"2026-01-21T15:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.425746 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.425787 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.425799 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.425816 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.425828 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:21Z","lastTransitionTime":"2026-01-21T15:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.528600 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.528650 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.528659 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.528679 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.528690 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:21Z","lastTransitionTime":"2026-01-21T15:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.622087 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:48:21 crc kubenswrapper[4760]: E0121 15:48:21.622264 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.622265 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:48:21 crc kubenswrapper[4760]: E0121 15:48:21.622480 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.624107 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 20:52:00.91224731 +0000 UTC Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.630817 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.630862 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.630874 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.630891 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.630901 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:21Z","lastTransitionTime":"2026-01-21T15:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.733410 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.733443 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.733452 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.733468 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.733478 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:21Z","lastTransitionTime":"2026-01-21T15:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.835996 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.836020 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.836029 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.836044 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.836053 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:21Z","lastTransitionTime":"2026-01-21T15:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.938633 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.938677 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.938691 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.938712 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:21 crc kubenswrapper[4760]: I0121 15:48:21.938726 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:21Z","lastTransitionTime":"2026-01-21T15:48:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.041646 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.041684 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.041695 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.041712 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.041722 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:22Z","lastTransitionTime":"2026-01-21T15:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.144366 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.144621 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.144705 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.144809 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.144911 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:22Z","lastTransitionTime":"2026-01-21T15:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.247276 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.247397 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.247424 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.247458 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.247484 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:22Z","lastTransitionTime":"2026-01-21T15:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.350459 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.350535 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.350553 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.350579 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.350598 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:22Z","lastTransitionTime":"2026-01-21T15:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.453743 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.454068 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.454153 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.454243 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.454380 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:22Z","lastTransitionTime":"2026-01-21T15:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.556953 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.557008 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.557020 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.557042 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.557061 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:22Z","lastTransitionTime":"2026-01-21T15:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.621478 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.621598 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:48:22 crc kubenswrapper[4760]: E0121 15:48:22.621893 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:48:22 crc kubenswrapper[4760]: E0121 15:48:22.622035 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.624892 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 01:58:52.172144885 +0000 UTC Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.635423 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.660541 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.660775 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.660853 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.660929 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.660997 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:22Z","lastTransitionTime":"2026-01-21T15:48:22Z","reason":"KubeletNotReady","message":"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.763932 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.763977 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.763988 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.764007 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.764020 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:22Z","lastTransitionTime":"2026-01-21T15:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.866858 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.866896 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.866905 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.866921 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.866931 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:22Z","lastTransitionTime":"2026-01-21T15:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.969795 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.969840 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.969851 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.969867 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:22 crc kubenswrapper[4760]: I0121 15:48:22.969876 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:22Z","lastTransitionTime":"2026-01-21T15:48:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.072467 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.072512 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.072524 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.072543 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.072557 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:23Z","lastTransitionTime":"2026-01-21T15:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.174780 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.174866 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.174879 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.174895 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.174905 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:23Z","lastTransitionTime":"2026-01-21T15:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.277628 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.277657 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.277667 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.277685 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.277696 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:23Z","lastTransitionTime":"2026-01-21T15:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.380441 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.380678 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.380690 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.380710 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.380725 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:23Z","lastTransitionTime":"2026-01-21T15:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.483887 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.483919 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.483926 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.483939 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.483947 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:23Z","lastTransitionTime":"2026-01-21T15:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.587601 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.587679 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.587706 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.587738 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.587761 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:23Z","lastTransitionTime":"2026-01-21T15:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.622236 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.622304 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:48:23 crc kubenswrapper[4760]: E0121 15:48:23.622415 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:48:23 crc kubenswrapper[4760]: E0121 15:48:23.622601 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.625076 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 15:39:48.172214344 +0000 UTC Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.690178 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.690221 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.690234 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.690255 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.690266 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:23Z","lastTransitionTime":"2026-01-21T15:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.792773 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.792829 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.792847 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.792868 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.792886 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:23Z","lastTransitionTime":"2026-01-21T15:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.896703 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.896743 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.896754 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.896772 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.896784 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:23Z","lastTransitionTime":"2026-01-21T15:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.999127 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.999164 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.999174 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.999187 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:23 crc kubenswrapper[4760]: I0121 15:48:23.999196 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:23Z","lastTransitionTime":"2026-01-21T15:48:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.102473 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.102542 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.102555 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.102572 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.102584 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:24Z","lastTransitionTime":"2026-01-21T15:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.206011 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.206058 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.206069 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.206088 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.206101 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:24Z","lastTransitionTime":"2026-01-21T15:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.308481 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.308528 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.308540 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.308560 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.308572 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:24Z","lastTransitionTime":"2026-01-21T15:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.411402 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.411516 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.411539 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.411568 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.411592 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:24Z","lastTransitionTime":"2026-01-21T15:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.515847 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.515931 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.515966 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.515998 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.516021 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:24Z","lastTransitionTime":"2026-01-21T15:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.619836 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.619923 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.619942 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.619972 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.619991 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:24Z","lastTransitionTime":"2026-01-21T15:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.622038 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.622038 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:48:24 crc kubenswrapper[4760]: E0121 15:48:24.622220 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:48:24 crc kubenswrapper[4760]: E0121 15:48:24.622367 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.625195 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 18:35:44.770917487 +0000 UTC Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.723617 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.723677 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.723688 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.723707 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.723719 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:24Z","lastTransitionTime":"2026-01-21T15:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.825767 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.825804 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.825814 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.825830 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.825839 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:24Z","lastTransitionTime":"2026-01-21T15:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.929149 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.929201 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.929214 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.929236 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:24 crc kubenswrapper[4760]: I0121 15:48:24.929251 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:24Z","lastTransitionTime":"2026-01-21T15:48:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.031263 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.031346 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.031363 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.031388 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.031410 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:25Z","lastTransitionTime":"2026-01-21T15:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.134508 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.134558 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.134569 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.134596 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.134609 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:25Z","lastTransitionTime":"2026-01-21T15:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.236164 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.236207 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.236217 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.236231 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.236240 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:25Z","lastTransitionTime":"2026-01-21T15:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.338796 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.338859 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.338877 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.338903 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.338922 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:25Z","lastTransitionTime":"2026-01-21T15:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.441434 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.441486 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.441496 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.441515 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.441524 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:25Z","lastTransitionTime":"2026-01-21T15:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.545874 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.545935 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.545953 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.545977 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.545995 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:25Z","lastTransitionTime":"2026-01-21T15:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.622045 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.622955 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.623196 4760 scope.go:117] "RemoveContainer" containerID="8e8c25d44d63ba5e418d4e013aa13b2ecf9c851b4dea690755cc9fde2b81efad" Jan 21 15:48:25 crc kubenswrapper[4760]: E0121 15:48:25.623238 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:48:25 crc kubenswrapper[4760]: E0121 15:48:25.623556 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.625439 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 10:03:59.108170486 +0000 UTC Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.650162 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.650234 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.650260 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.650292 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.650316 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:25Z","lastTransitionTime":"2026-01-21T15:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.753604 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.753666 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.753677 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.753698 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.753737 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:25Z","lastTransitionTime":"2026-01-21T15:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.856986 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.857032 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.857040 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.857057 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.857067 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:25Z","lastTransitionTime":"2026-01-21T15:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.959938 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.959981 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.959990 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.960007 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:25 crc kubenswrapper[4760]: I0121 15:48:25.960017 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:25Z","lastTransitionTime":"2026-01-21T15:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.062915 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.062979 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.062997 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.063023 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.063040 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:26Z","lastTransitionTime":"2026-01-21T15:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.165455 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.165766 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.165965 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.166201 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.166495 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:26Z","lastTransitionTime":"2026-01-21T15:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.270668 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.270749 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.270812 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.270844 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.270867 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:26Z","lastTransitionTime":"2026-01-21T15:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.373552 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.373620 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.373644 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.373674 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.373696 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:26Z","lastTransitionTime":"2026-01-21T15:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.478059 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.478197 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.478217 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.478246 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.478296 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:26Z","lastTransitionTime":"2026-01-21T15:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.581528 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.581773 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.581873 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.581986 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.582059 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:26Z","lastTransitionTime":"2026-01-21T15:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.633943 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 23:22:21.993574908 +0000 UTC Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.634005 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:48:26 crc kubenswrapper[4760]: E0121 15:48:26.634138 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.633984 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:48:26 crc kubenswrapper[4760]: E0121 15:48:26.634401 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.684478 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.684523 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.684531 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.684550 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.684564 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:26Z","lastTransitionTime":"2026-01-21T15:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.787366 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.787439 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.787457 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.787479 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.787496 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:26Z","lastTransitionTime":"2026-01-21T15:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.891211 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.891251 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.891262 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.891278 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.891288 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:26Z","lastTransitionTime":"2026-01-21T15:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.994756 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.994826 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.994867 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.994911 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:26 crc kubenswrapper[4760]: I0121 15:48:26.994925 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:26Z","lastTransitionTime":"2026-01-21T15:48:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.098006 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.098048 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.098057 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.098075 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.098085 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:27Z","lastTransitionTime":"2026-01-21T15:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.200855 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.200886 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.200895 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.200909 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.200918 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:27Z","lastTransitionTime":"2026-01-21T15:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.241632 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gfprm_aa19ef03-9cda-4ae5-b47c-4a3bac73dc49/ovnkube-controller/2.log" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.245710 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" event={"ID":"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49","Type":"ContainerStarted","Data":"941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c"} Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.303865 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.303920 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.303935 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.303956 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.303970 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:27Z","lastTransitionTime":"2026-01-21T15:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.407920 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.407972 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.407983 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.408004 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.408016 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:27Z","lastTransitionTime":"2026-01-21T15:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.510691 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.510734 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.510744 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.510761 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.510774 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:27Z","lastTransitionTime":"2026-01-21T15:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.613715 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.613802 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.613824 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.613857 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.613881 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:27Z","lastTransitionTime":"2026-01-21T15:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.622131 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.622178 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:48:27 crc kubenswrapper[4760]: E0121 15:48:27.622381 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:48:27 crc kubenswrapper[4760]: E0121 15:48:27.622594 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.634155 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 08:57:59.174637845 +0000 UTC Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.717300 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.717380 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.717403 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.717433 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.717457 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:27Z","lastTransitionTime":"2026-01-21T15:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.820129 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.820176 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.820187 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.820203 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.820212 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:27Z","lastTransitionTime":"2026-01-21T15:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.922846 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.922896 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.922907 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.922926 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:27 crc kubenswrapper[4760]: I0121 15:48:27.922940 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:27Z","lastTransitionTime":"2026-01-21T15:48:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.025990 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.026042 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.026054 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.026076 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.026091 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:28Z","lastTransitionTime":"2026-01-21T15:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.128614 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.128654 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.128667 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.128683 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.128693 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:28Z","lastTransitionTime":"2026-01-21T15:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.231130 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.231169 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.231180 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.231198 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.231212 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:28Z","lastTransitionTime":"2026-01-21T15:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.273361 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef757d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.289892 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7eca65d-5425-4f48-8e4e-b10a336991ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://002688cbf99c341b0f62e4dbb27d8a6e8bca6e7dcd3a989456ba96bd86616d1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e38d82411fea97f87c89a87c697100f1cba1a5955ba69597d4709e8d3f8b284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e38d82411fea97f87c89a87c697100f1cba1a5955ba69597d4709e8d3f8b284\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.308860 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.331681 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.333913 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.333947 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.333960 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.333978 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.333989 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:28Z","lastTransitionTime":"2026-01-21T15:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.352772 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a7fff61ff004ec8235bfddb3a9e0c3a82ac17591b1841d10b366f02833c2fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.380251 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e8c25d44d63ba5e418d4e013aa13b2ecf9c851b4dea690755cc9fde2b81efad\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"message\\\":\\\"re:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0121 15:47:57.608847 6401 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create 
admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:57Z is after 2025-08-24T17:21:41Z]\\\\nI0121 15:47:57.608845 6401 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: 
UU\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.396828 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.419535 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.436933 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.437069 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 
15:48:28.437125 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.437144 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.437169 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.437187 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:28Z","lastTransitionTime":"2026-01-21T15:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.452791 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.468445 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad0b627-e961-4ca1-9d20-35844f88fac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33201b5ec0e28f2916354c9c63b13c3cdcb8776cba4df19d81098b28233d5c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-476wz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.488261 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://293f11d27cd6f37ed1446eb9d03303cd0d18c5e0c23fb8fce2818caaaab93cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:48:14Z\\\",\\\"message\\\":\\\"2026-01-21T15:47:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_205bda0a-d38e-417c-9402-dd4c695a8fbd\\\\n2026-01-21T15:47:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_205bda0a-d38e-417c-9402-dd4c695a8fbd to /host/opt/cni/bin/\\\\n2026-01-21T15:47:29Z [verbose] multus-daemon started\\\\n2026-01-21T15:47:29Z [verbose] 
Readiness Indicator file check\\\\n2026-01-21T15:48:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.504392 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9db374715a149de88bf244ba92b6c
1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.521949 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707
659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.536176 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0d5028d-4de2-4dda-ae30-dadeaa3eaf46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f330d26a2d002d23a401307cce13e0d816b1e572879693f30a3fa01dc4ad8a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1481b51556e723add302df7aae2c36688f3dcbc33b56907e24963b66bcbb5091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e8c255e683ef07dd2e8d3ce3cc7fe9be2f68f15c32a31d29048e8a12774322e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ab6060c3df944268200f5c7c581bf1923729d9e4bb2f91284e0e67e6937d627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://7ab6060c3df944268200f5c7c581bf1923729d9e4bb2f91284e0e67e6937d627\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.540008 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.540058 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.540072 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.540099 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.540119 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:28Z","lastTransitionTime":"2026-01-21T15:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.552605 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.568895 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbf7aa4ff7b0ecc831f88bd2dcdd99f141c9fa0316bd2a3cc9eb8a0241f0defc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf1cd
61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.583156 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3343040fe5b24cd28de48097d45276f8f81ad5ead92c7db4f417015e3d731d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad55a1510883069f146d73179af7340208f024da02e1a3de1ff99a6a0805b9fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-k8rxc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-21T15:48:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.595229 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bbr8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a4b6476-7a89-41b4-b918-5628f622c7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-465m7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-465m7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bbr8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:28Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:28 crc 
kubenswrapper[4760]: I0121 15:48:28.621601 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:48:28 crc kubenswrapper[4760]: E0121 15:48:28.621822 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.621601 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:48:28 crc kubenswrapper[4760]: E0121 15:48:28.622410 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.634781 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 20:18:05.529058269 +0000 UTC Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.643308 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.643532 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.643698 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.643825 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.643950 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:28Z","lastTransitionTime":"2026-01-21T15:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.747731 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.747824 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.747854 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.747888 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.747916 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:28Z","lastTransitionTime":"2026-01-21T15:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.850196 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.850251 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.850265 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.850287 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.850299 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:28Z","lastTransitionTime":"2026-01-21T15:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.952945 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.953037 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.953055 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.953083 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:28 crc kubenswrapper[4760]: I0121 15:48:28.953102 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:28Z","lastTransitionTime":"2026-01-21T15:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.055248 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.055288 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.055297 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.055311 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.055336 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:29Z","lastTransitionTime":"2026-01-21T15:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.158383 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.158446 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.158463 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.158491 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.158510 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:29Z","lastTransitionTime":"2026-01-21T15:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.253836 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gfprm_aa19ef03-9cda-4ae5-b47c-4a3bac73dc49/ovnkube-controller/3.log" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.254738 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gfprm_aa19ef03-9cda-4ae5-b47c-4a3bac73dc49/ovnkube-controller/2.log" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.258254 4760 generic.go:334] "Generic (PLEG): container finished" podID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerID="941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c" exitCode=1 Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.258300 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" event={"ID":"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49","Type":"ContainerDied","Data":"941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c"} Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.258356 4760 scope.go:117] "RemoveContainer" containerID="8e8c25d44d63ba5e418d4e013aa13b2ecf9c851b4dea690755cc9fde2b81efad" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.259243 4760 scope.go:117] "RemoveContainer" containerID="941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c" Jan 21 15:48:29 crc kubenswrapper[4760]: E0121 15:48:29.259482 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-gfprm_openshift-ovn-kubernetes(aa19ef03-9cda-4ae5-b47c-4a3bac73dc49)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.260872 4760 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.260905 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.260913 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.260927 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.260936 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:29Z","lastTransitionTime":"2026-01-21T15:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.280533 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.296624 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0d5028d-4de2-4dda-ae30-dadeaa3eaf46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f330d26a2d002d23a401307cce13e0d816b1e572879693f30a3fa01dc4ad8a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1481b51556e723add302df7aae2c36688f3dcbc33b56907e24963b66bcbb5091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e8c255e683ef07dd2e8d3ce3cc7fe9be2f68f15c32a31d29048e8a12774322e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ab6060c3df944268200f5c7c581bf1923729d9e4bb2f91284e0e67e6937d627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://7ab6060c3df944268200f5c7c581bf1923729d9e4bb2f91284e0e67e6937d627\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.311120 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.330526 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbf7aa4ff7b0ecc831f88bd2dcdd99f141c9fa0316bd2a3cc9eb8a0241f0defc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf1cd
61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.348283 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3343040fe5b24cd28de48097d45276f8f81ad5ead92c7db4f417015e3d731d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad55a1510883069f146d73179af7340208f024da02e1a3de1ff99a6a0805b9fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-k8rxc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.360653 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bbr8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a4b6476-7a89-41b4-b918-5628f622c7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-465m7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-465m7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bbr8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc 
kubenswrapper[4760]: I0121 15:48:29.363913 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.363973 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.363987 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.364009 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.364021 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:29Z","lastTransitionTime":"2026-01-21T15:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.398066 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef757d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.417346 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7eca65d-5425-4f48-8e4e-b10a336991ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://002688cbf99c341b0f62e4dbb27d8a6e8bca6e7dcd3a989456ba96bd86616d1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e38d82411fea97f87c89a87c697100f1cba1a5955ba69597d4709e8d3f8b284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e38d82411fea97f87c89a87c697100f1cba1a5955ba69597d4709e8d3f8b284\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.445057 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.461584 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.466340 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.466374 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.466399 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.466416 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.466426 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:29Z","lastTransitionTime":"2026-01-21T15:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.474800 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a7fff61ff004ec8235bfddb3a9e0c3a82ac17591b1841d10b366f02833c2fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.492885 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e8c25d44d63ba5e418d4e013aa13b2ecf9c851b4dea690755cc9fde2b81efad\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"message\\\":\\\"re:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0121 15:47:57.608847 6401 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create 
admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:57Z is after 2025-08-24T17:21:41Z]\\\\nI0121 15:47:57.608845 6401 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UU\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"message\\\":\\\"{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.239\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0121 
15:48:28.146849 6804 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 15:48:28.146920 6804 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:28Z is after 2025-08-24T17:21:41Z]\\\\nI0121 15:48:28.146916 6804 
model_client\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b47
79d8ee80a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.504947 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.516749 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.529574 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.540938 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.551194 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad0b627-e961-4ca1-9d20-35844f88fac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33201b5ec0e28f2916354c9c63b13c3cdcb8776cba4df19d81098b28233d5c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-476wz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.564145 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://293f11d27cd6f37ed1446eb9d03303cd0d18c5e0c23fb8fce2818caaaab93cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:48:14Z\\\",\\\"message\\\":\\\"2026-01-21T15:47:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_205bda0a-d38e-417c-9402-dd4c695a8fbd\\\\n2026-01-21T15:47:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_205bda0a-d38e-417c-9402-dd4c695a8fbd to /host/opt/cni/bin/\\\\n2026-01-21T15:47:29Z [verbose] multus-daemon started\\\\n2026-01-21T15:47:29Z [verbose] 
Readiness Indicator file check\\\\n2026-01-21T15:48:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.568647 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.568692 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.568705 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.568725 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.568737 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:29Z","lastTransitionTime":"2026-01-21T15:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.574131 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.622160 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.622160 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:48:29 crc kubenswrapper[4760]: E0121 15:48:29.622444 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:48:29 crc kubenswrapper[4760]: E0121 15:48:29.622532 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.635486 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 09:38:30.549719724 +0000 UTC Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.640088 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://293f11d27cd6f37ed1446eb9d03303cd0d18c5e0c23fb8fce2818caaaab93cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b
5e7f324aa5cf23340d953f9a6eba6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:48:14Z\\\",\\\"message\\\":\\\"2026-01-21T15:47:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_205bda0a-d38e-417c-9402-dd4c695a8fbd\\\\n2026-01-21T15:47:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_205bda0a-d38e-417c-9402-dd4c695a8fbd to /host/opt/cni/bin/\\\\n2026-01-21T15:47:29Z [verbose] multus-daemon started\\\\n2026-01-21T15:47:29Z [verbose] Readiness Indicator file check\\\\n2026-01-21T15:48:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"hos
t-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.654202 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9db374715a149de88bf244ba92b6c
1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.670792 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.670836 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.670847 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:29 crc 
kubenswrapper[4760]: I0121 15:48:29.670863 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.670875 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:29Z","lastTransitionTime":"2026-01-21T15:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.673350 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15
:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 
1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,
\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.684775 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0d5028d-4de2-4dda-ae30-dadeaa3eaf46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f330d26a2d002d23a401307cce13e0d816b1e572879693f30a3fa01dc4ad8a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1481b51556e723add302df7aae2c36688f3dcbc33b56907e24963b66bcbb5091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e8c255e683ef07dd2e8d3ce3cc7fe9be2f68f15c32a31d29048e8a12774322e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ab6060c3df944268200f5c7c581bf1923729d9e4bb2f91284e0e67e6937d627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://7ab6060c3df944268200f5c7c581bf1923729d9e4bb2f91284e0e67e6937d627\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.698089 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.714105 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbf7aa4ff7b0ecc831f88bd2dcdd99f141c9fa0316bd2a3cc9eb8a0241f0defc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf1cd
61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.725303 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3343040fe5b24cd28de48097d45276f8f81ad5ead92c7db4f417015e3d731d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad55a1510883069f146d73179af7340208f024da02e1a3de1ff99a6a0805b9fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-k8rxc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.736397 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bbr8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a4b6476-7a89-41b4-b918-5628f622c7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-465m7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-465m7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bbr8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc 
kubenswrapper[4760]: I0121 15:48:29.756933 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef757d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.769259 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7eca65d-5425-4f48-8e4e-b10a336991ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://002688cbf99c341b0f62e4dbb27d8a6e8bca6e7dcd3a989456ba96bd86616d1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e38d82411fea97f87c89a87c697100f1cba1a5955ba69597d4709e8d3f8b284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e38d82411fea97f87c89a87c697100f1cba1a5955ba69597d4709e8d3f8b284\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.772999 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.773037 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.773048 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.773067 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.773078 4760 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:29Z","lastTransitionTime":"2026-01-21T15:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.783921 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.796841 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.809073 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a7fff61ff004ec8235bfddb3a9e0c3a82ac17591b1841d10b366f02833c2fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.826735 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e8c25d44d63ba5e418d4e013aa13b2ecf9c851b4dea690755cc9fde2b81efad\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:47:58Z\\\",\\\"message\\\":\\\"re:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0121 15:47:57.608847 6401 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create 
admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:47:57Z is after 2025-08-24T17:21:41Z]\\\\nI0121 15:47:57.608845 6401 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UU\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"message\\\":\\\"{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.239\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0121 
15:48:28.146849 6804 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 15:48:28.146920 6804 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:28Z is after 2025-08-24T17:21:41Z]\\\\nI0121 15:48:28.146916 6804 
model_client\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:48:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b47
79d8ee80a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.839886 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.853521 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.862119 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.862153 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.862172 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.862186 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.862198 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:29Z","lastTransitionTime":"2026-01-21T15:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.868677 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: E0121 15:48:29.873600 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d0d32b68-30d1-4a80-9669-b44aebde12c8\\\",\\\"systemUUID\\\":\\\"2d155234-f7e3-4a8d-82a9-efdad3b8958b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.876702 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.876729 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.876737 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.876751 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.876760 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:29Z","lastTransitionTime":"2026-01-21T15:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.879906 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.891516 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad0b627-e961-4ca1-9d20-35844f88fac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33201b5ec0e28f2916354c9c63b13c3cdcb8776cba4df19d81098b28233d5c2d\\\",\\\"image\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-476wz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: E0121 15:48:29.898221 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d0d32b68-30d1-4a80-9669-b44aebde12c8\\\",\\\"systemUUID\\\":\\\"2d155234-f7e3-4a8d-82a9-efdad3b8958b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.902734 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.902759 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.902766 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.902780 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.902789 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:29Z","lastTransitionTime":"2026-01-21T15:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:29 crc kubenswrapper[4760]: E0121 15:48:29.914035 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d0d32b68-30d1-4a80-9669-b44aebde12c8\\\",\\\"systemUUID\\\":\\\"2d155234-f7e3-4a8d-82a9-efdad3b8958b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.917207 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.917234 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.917242 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.917256 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.917264 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:29Z","lastTransitionTime":"2026-01-21T15:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:29 crc kubenswrapper[4760]: E0121 15:48:29.928440 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d0d32b68-30d1-4a80-9669-b44aebde12c8\\\",\\\"systemUUID\\\":\\\"2d155234-f7e3-4a8d-82a9-efdad3b8958b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.931794 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.931817 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.931826 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.931838 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.931847 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:29Z","lastTransitionTime":"2026-01-21T15:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:29 crc kubenswrapper[4760]: E0121 15:48:29.943793 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d0d32b68-30d1-4a80-9669-b44aebde12c8\\\",\\\"systemUUID\\\":\\\"2d155234-f7e3-4a8d-82a9-efdad3b8958b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:29Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:29 crc kubenswrapper[4760]: E0121 15:48:29.943899 4760 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.945518 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.945542 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.945551 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.945563 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:29 crc kubenswrapper[4760]: I0121 15:48:29.945571 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:29Z","lastTransitionTime":"2026-01-21T15:48:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.047783 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.047829 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.047846 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.047885 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.047899 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:30Z","lastTransitionTime":"2026-01-21T15:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.150830 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.150905 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.150932 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.150964 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.150987 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:30Z","lastTransitionTime":"2026-01-21T15:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.253560 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.253638 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.253666 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.253697 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.253720 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:30Z","lastTransitionTime":"2026-01-21T15:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.263712 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gfprm_aa19ef03-9cda-4ae5-b47c-4a3bac73dc49/ovnkube-controller/3.log" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.356699 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.356758 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.356767 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.356784 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.356794 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:30Z","lastTransitionTime":"2026-01-21T15:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.363997 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.364755 4760 scope.go:117] "RemoveContainer" containerID="941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c" Jan 21 15:48:30 crc kubenswrapper[4760]: E0121 15:48:30.364909 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-gfprm_openshift-ovn-kubernetes(aa19ef03-9cda-4ae5-b47c-4a3bac73dc49)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.379446 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1cda26f1-46aa-4bba-8048-c06c3ddec6b2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T15:47:27Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0121 15:47:26.978259 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 15:47:26.978430 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 15:47:26.979705 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3718584894/tls.crt::/tmp/serving-cert-3718584894/tls.key\\\\\\\"\\\\nI0121 15:47:27.371682 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 15:47:27.373554 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 15:47:27.373575 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 15:47:27.373596 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 15:47:27.373603 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 15:47:27.377499 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 15:47:27.377524 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377530 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 15:47:27.377535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 15:47:27.377538 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 15:47:27.377542 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 15:47:27.377545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 15:47:27.377565 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0121 15:47:27.379252 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8d49727c9ab370d4cac05c874b1f3707
659865c1a450d4df832bc8bf20fd2c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.395242 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d0d5028d-4de2-4dda-ae30-dadeaa3eaf46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f330d26a2d002d23a401307cce13e0d816b1e572879693f30a3fa01dc4ad8a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1481b51556e723add302df7aae2c36688f3dcbc33b56907e24963b66bcbb5091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e8c255e683ef07dd2e8d3ce3cc7fe9be2f68f15c32a31d29048e8a12774322e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ab6060c3df944268200f5c7c581bf1923729d9e4bb2f91284e0e67e6937d627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://7ab6060c3df944268200f5c7c581bf1923729d9e4bb2f91284e0e67e6937d627\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.409526 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.430810 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lkblz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd3c6c18-f174-4022-96c5-5892413c76fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fbf7aa4ff7b0ecc831f88bd2dcdd99f141c9fa0316bd2a3cc9eb8a0241f0defc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bdacb58fbdd05790d3dca811ee3227725f593bd8f0607a897a6314def24706f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f64c2b8c462fd21f3fd4443eaa0b55d9425906e02b8182a2930f4a1d3fd58503\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a563ff2b27aecc6e8686a91c2e03e5387315358ec5888ec8242a6b1c3b625705\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf1cd
61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf1cd61030a222e18b633b1a984753b6c6911d720111dd7a51bff85cd7c6af27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6342cfb054ab565dbb8a1c62ee835c0746ec0d93f5577b015fc48ecda93b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:35Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f71916c1e372ecf888c4d21e05264822bcfcd0a8ca5793732ce4c609bd5f8ce6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5667k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lkblz\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.444393 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8d9ad0-e9fd-4b9e-b0cc-7072ffcce6df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc3343040fe5b24cd28de48097d45276f8f81ad5ead92c7db4f417015e3d731d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad55a1510883069f146d73179af7340208f024da02e1a3de1ff99a6a0805b9fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrt5f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:41Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-k8rxc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-21T15:48:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.454776 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bbr8l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a4b6476-7a89-41b4-b918-5628f622c7c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-465m7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-465m7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:42Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bbr8l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:30 crc 
kubenswrapper[4760]: I0121 15:48:30.458752 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.458779 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.458791 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.458809 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.458821 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:30Z","lastTransitionTime":"2026-01-21T15:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.476148 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6c350ab-7705-4b08-a36e-19aacdce6c6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1325c938c1267b2a766b2f8c649de35a3cf885dbc3e737c573f06997121297d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56433eab8688b209fbd77a546844e40cb7d9398912c6326ee23925135c406d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e8b867d8581f573680150890b2c5a275020025a6acdcaccf3f72b879e74c37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f26cd173522da98d1a48179132a3ffa2bc1ef757d00df5d04cb9d68ebbea7ae0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73531acce19a36e1a6d4715fa5745aaccbaaa36423d048084b343bc61dbb1922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3daf774b888a95653400fcbad84fcd66c713ad349b3ada9c9fa55c74ef114f17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c037e9ea9957fc42b1359b1c20265663ae8c2f64c7fbf5bdfeaa52c7bd5e463\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://148536af55a942856537cfd43c2e928d1acb1efc1565dacb18ac422fe06a8428\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-21T15:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.486790 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7eca65d-5425-4f48-8e4e-b10a336991ea\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://002688cbf99c341b0f62e4dbb27d8a6e8bca6e7dcd3a989456ba96bd86616d1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e38d82411fea97f87c89a87c697100f1cba1a5955ba69597d4709e8d3f8b284\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e38d82411fea97f87c89a87c697100f1cba1a5955ba69597d4709e8d3f8b284\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.499000 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.512722 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2e5966aedb32fce7518faca4ccee950711e67cb2881b6d537475701b6ac6c46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.523230 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a7fff61ff004ec8235bfddb3a9e0c3a82ac17591b1841d10b366f02833c2fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.539939 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:48:29Z\\\",\\\"message\\\":\\\"{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.239\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, 
Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0121 15:48:28.146849 6804 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 15:48:28.146920 6804 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:28Z is after 2025-08-24T17:21:41Z]\\\\nI0121 15:48:28.146916 6804 model_client\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:48:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-gfprm_openshift-ovn-kubernetes(aa19ef03-9cda-4ae5-b47c-4a3bac73dc49)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1b0ce2af227534c5
261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kv9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-gfprm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.554910 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f0cca8a-f164-4f14-8dfb-669907601207\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36614cde77ef4319b1700ba44c155a421d8da77c1a54ddfa8877c3dd7d92c1a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1fa756774ecd259251b45b1e2fb3f48ec2edcd82db462157b6be3752d723228\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81067b2f0d6ca7430be9fec6a7f1ef5258ae1aabb84e1181131db76fd1ce606e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T15:47:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.562394 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.562425 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.562434 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.562450 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.562460 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:30Z","lastTransitionTime":"2026-01-21T15:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.569402 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d496d191016b0cef40ee8fbf976d96cfc44cd0019be46ec8eae45076fb7156e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035cb35f
96e901f528272334d7a57e784bd16831574c20372d9eddbdbe3b195e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.582122 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.592102 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4g84s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40eabf28-9fbd-41ef-a858-de7ece013f68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826b8c8913e4adb8a55fa083b8e667ea3b2deb0150375e50e7c4f5ed7a5df244\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7f42p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4g84s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.601188 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-nqxc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad0b627-e961-4ca1-9d20-35844f88fac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33201b5ec0e28f2916354c9c63b13c3cdcb8776cba4df19d81098b28233d5c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-476wz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:30Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-nqxc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.613905 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dx99k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7300c51f-415f-4696-bda1-a9e79ae5704a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:48:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://293f11d27cd6f37ed1446eb9d03303cd0d18c5e0c23fb8fce2818caaaab93cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T15:48:14Z\\\",\\\"message\\\":\\\"2026-01-21T15:47:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_205bda0a-d38e-417c-9402-dd4c695a8fbd\\\\n2026-01-21T15:47:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_205bda0a-d38e-417c-9402-dd4c695a8fbd to /host/opt/cni/bin/\\\\n2026-01-21T15:47:29Z [verbose] multus-daemon started\\\\n2026-01-21T15:47:29Z [verbose] 
Readiness Indicator file check\\\\n2026-01-21T15:48:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:48:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v6m44\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dx99k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.622292 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.622377 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:48:30 crc kubenswrapper[4760]: E0121 15:48:30.622447 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:48:30 crc kubenswrapper[4760]: E0121 15:48:30.622566 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.625820 4760 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dd365e7-570c-4130-a299-30e376624ce2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T15:47:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a9a5be4bb0315a5105b75e1116563a1b2fc3e5acf1e0b35c9b49978f333b098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"
name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b0860c42e9db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T15:47:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kxjbt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T15:47:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lp9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T15:48:30Z is after 2025-08-24T17:21:41Z" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.635959 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 
+0000 UTC, rotation deadline is 2025-12-19 18:17:34.423597163 +0000 UTC Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.665492 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.665540 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.665549 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.665567 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.665578 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:30Z","lastTransitionTime":"2026-01-21T15:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.768535 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.768588 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.768603 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.768623 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.768638 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:30Z","lastTransitionTime":"2026-01-21T15:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.871629 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.871696 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.871713 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.871737 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.871754 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:30Z","lastTransitionTime":"2026-01-21T15:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.975469 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.975542 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.975565 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.975595 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:30 crc kubenswrapper[4760]: I0121 15:48:30.975619 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:30Z","lastTransitionTime":"2026-01-21T15:48:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.078655 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.078717 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.078729 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.078748 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.078765 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:31Z","lastTransitionTime":"2026-01-21T15:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.181605 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.181647 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.181655 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.181671 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.181683 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:31Z","lastTransitionTime":"2026-01-21T15:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.284086 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.284116 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.284125 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.284138 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.284147 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:31Z","lastTransitionTime":"2026-01-21T15:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.363299 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:48:31 crc kubenswrapper[4760]: E0121 15:48:31.363541 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-21 15:49:35.363505116 +0000 UTC m=+146.031274694 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.387257 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.387297 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.387306 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.387335 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.387346 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:31Z","lastTransitionTime":"2026-01-21T15:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.464522 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.464590 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.464628 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.464671 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:48:31 crc kubenswrapper[4760]: E0121 15:48:31.464709 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered Jan 21 15:48:31 crc kubenswrapper[4760]: E0121 15:48:31.464742 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:48:31 crc kubenswrapper[4760]: E0121 15:48:31.464756 4760 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:48:31 crc kubenswrapper[4760]: E0121 15:48:31.464804 4760 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:48:31 crc kubenswrapper[4760]: E0121 15:48:31.464815 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 15:48:31 crc kubenswrapper[4760]: E0121 15:48:31.464866 4760 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 15:48:31 crc kubenswrapper[4760]: E0121 15:48:31.464860 4760 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:48:31 crc kubenswrapper[4760]: E0121 15:48:31.464813 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 15:49:35.464795498 +0000 UTC m=+146.132565076 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:48:31 crc kubenswrapper[4760]: E0121 15:48:31.465008 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:49:35.464987743 +0000 UTC m=+146.132757321 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 15:48:31 crc kubenswrapper[4760]: E0121 15:48:31.465024 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 15:49:35.465014853 +0000 UTC m=+146.132784431 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 15:48:31 crc kubenswrapper[4760]: E0121 15:48:31.464880 4760 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:48:31 crc kubenswrapper[4760]: E0121 15:48:31.465092 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 15:49:35.465075124 +0000 UTC m=+146.132844752 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.490075 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.490124 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.490133 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.490148 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.490157 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:31Z","lastTransitionTime":"2026-01-21T15:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.592871 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.592922 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.592933 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.592950 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.592962 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:31Z","lastTransitionTime":"2026-01-21T15:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.621870 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.621954 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:48:31 crc kubenswrapper[4760]: E0121 15:48:31.622018 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:48:31 crc kubenswrapper[4760]: E0121 15:48:31.622125 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.637098 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 16:22:00.903981628 +0000 UTC Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.695775 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.695834 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.695850 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.695871 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.695926 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:31Z","lastTransitionTime":"2026-01-21T15:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.798635 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.798701 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.798717 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.798741 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.798777 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:31Z","lastTransitionTime":"2026-01-21T15:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.902135 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.902202 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.902226 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.902256 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:31 crc kubenswrapper[4760]: I0121 15:48:31.902278 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:31Z","lastTransitionTime":"2026-01-21T15:48:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.006297 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.006383 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.006399 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.006423 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.006436 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:32Z","lastTransitionTime":"2026-01-21T15:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.109124 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.109174 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.109190 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.109214 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.109231 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:32Z","lastTransitionTime":"2026-01-21T15:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.212572 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.212626 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.212642 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.212662 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.212675 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:32Z","lastTransitionTime":"2026-01-21T15:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.314864 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.314893 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.314903 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.314921 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.314937 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:32Z","lastTransitionTime":"2026-01-21T15:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.417055 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.417100 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.417109 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.417126 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.417138 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:32Z","lastTransitionTime":"2026-01-21T15:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.520463 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.520551 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.520577 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.520612 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.520635 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:32Z","lastTransitionTime":"2026-01-21T15:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.621611 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.621631 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:48:32 crc kubenswrapper[4760]: E0121 15:48:32.621784 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:48:32 crc kubenswrapper[4760]: E0121 15:48:32.621867 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.623244 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.623280 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.623296 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.623312 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.623338 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:32Z","lastTransitionTime":"2026-01-21T15:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.637591 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 11:02:45.164215853 +0000 UTC Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.727124 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.727160 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.727170 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.727184 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.727195 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:32Z","lastTransitionTime":"2026-01-21T15:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.829926 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.830269 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.830280 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.830300 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.830312 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:32Z","lastTransitionTime":"2026-01-21T15:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.933482 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.933540 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.933552 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.933568 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:32 crc kubenswrapper[4760]: I0121 15:48:32.933578 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:32Z","lastTransitionTime":"2026-01-21T15:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.036089 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.036139 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.036157 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.036178 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.036195 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:33Z","lastTransitionTime":"2026-01-21T15:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.138615 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.138652 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.138662 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.138680 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.138691 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:33Z","lastTransitionTime":"2026-01-21T15:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.241475 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.241534 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.241551 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.241575 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.241591 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:33Z","lastTransitionTime":"2026-01-21T15:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.343906 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.343968 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.343985 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.344006 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.344018 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:33Z","lastTransitionTime":"2026-01-21T15:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.447596 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.447641 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.447651 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.447667 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.447677 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:33Z","lastTransitionTime":"2026-01-21T15:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.551132 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.551198 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.551215 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.551241 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.551258 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:33Z","lastTransitionTime":"2026-01-21T15:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.622226 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.622441 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:48:33 crc kubenswrapper[4760]: E0121 15:48:33.622444 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:48:33 crc kubenswrapper[4760]: E0121 15:48:33.622648 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.638303 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 04:33:44.598610033 +0000 UTC Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.654360 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.654405 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.654445 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.654460 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.654470 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:33Z","lastTransitionTime":"2026-01-21T15:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.757512 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.757580 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.757591 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.757606 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.757616 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:33Z","lastTransitionTime":"2026-01-21T15:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.860152 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.860221 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.860242 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.860265 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.860281 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:33Z","lastTransitionTime":"2026-01-21T15:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.962926 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.962964 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.962974 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.962989 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:33 crc kubenswrapper[4760]: I0121 15:48:33.962998 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:33Z","lastTransitionTime":"2026-01-21T15:48:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.065144 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.065208 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.065226 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.065250 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.065267 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:34Z","lastTransitionTime":"2026-01-21T15:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.167836 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.167871 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.167881 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.167901 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.167912 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:34Z","lastTransitionTime":"2026-01-21T15:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.271738 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.271860 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.271902 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.271936 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.271955 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:34Z","lastTransitionTime":"2026-01-21T15:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.375369 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.375430 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.375448 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.375472 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.375492 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:34Z","lastTransitionTime":"2026-01-21T15:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.479291 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.479370 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.479381 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.479401 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.479410 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:34Z","lastTransitionTime":"2026-01-21T15:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.583291 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.583386 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.583399 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.583418 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.583431 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:34Z","lastTransitionTime":"2026-01-21T15:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.621513 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.621584 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:48:34 crc kubenswrapper[4760]: E0121 15:48:34.621698 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:48:34 crc kubenswrapper[4760]: E0121 15:48:34.621767 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.638734 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 22:27:12.865328413 +0000 UTC Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.685886 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.685948 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.685960 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.685977 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.685990 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:34Z","lastTransitionTime":"2026-01-21T15:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.789612 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.789657 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.789665 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.789681 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.789693 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:34Z","lastTransitionTime":"2026-01-21T15:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.893417 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.893456 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.893469 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.893490 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.893507 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:34Z","lastTransitionTime":"2026-01-21T15:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.996599 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.996682 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.996696 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.996713 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:34 crc kubenswrapper[4760]: I0121 15:48:34.996724 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:34Z","lastTransitionTime":"2026-01-21T15:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.099564 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.099609 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.099620 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.099637 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.099651 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:35Z","lastTransitionTime":"2026-01-21T15:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.202622 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.202656 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.202680 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.202695 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.202703 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:35Z","lastTransitionTime":"2026-01-21T15:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.305397 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.305440 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.305454 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.305472 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.305485 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:35Z","lastTransitionTime":"2026-01-21T15:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.409015 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.409092 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.409116 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.409146 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.409171 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:35Z","lastTransitionTime":"2026-01-21T15:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.512542 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.512604 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.512623 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.512650 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.512674 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:35Z","lastTransitionTime":"2026-01-21T15:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.615152 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.615217 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.615235 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.615265 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.615289 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:35Z","lastTransitionTime":"2026-01-21T15:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.621564 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.621604 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:48:35 crc kubenswrapper[4760]: E0121 15:48:35.621797 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:48:35 crc kubenswrapper[4760]: E0121 15:48:35.621978 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.639172 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 13:45:55.261231947 +0000 UTC Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.718757 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.718820 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.718877 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.718904 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.718959 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:35Z","lastTransitionTime":"2026-01-21T15:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.824521 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.824609 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.824634 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.824667 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.824692 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:35Z","lastTransitionTime":"2026-01-21T15:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.928489 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.928575 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.928603 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.928640 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:35 crc kubenswrapper[4760]: I0121 15:48:35.928664 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:35Z","lastTransitionTime":"2026-01-21T15:48:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.032785 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.032870 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.032887 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.032912 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.032927 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:36Z","lastTransitionTime":"2026-01-21T15:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.135245 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.135280 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.135289 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.135304 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.135314 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:36Z","lastTransitionTime":"2026-01-21T15:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.239967 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.240011 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.240020 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.240035 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.240050 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:36Z","lastTransitionTime":"2026-01-21T15:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.342834 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.342881 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.342892 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.342909 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.342921 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:36Z","lastTransitionTime":"2026-01-21T15:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.445395 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.445441 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.445452 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.445469 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.445482 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:36Z","lastTransitionTime":"2026-01-21T15:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.548514 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.548584 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.548603 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.548634 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.548664 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:36Z","lastTransitionTime":"2026-01-21T15:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.622516 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.622614 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:48:36 crc kubenswrapper[4760]: E0121 15:48:36.622705 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:48:36 crc kubenswrapper[4760]: E0121 15:48:36.622950 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.640272 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 03:00:39.26125265 +0000 UTC Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.651117 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.651172 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.651185 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.651206 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.651223 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:36Z","lastTransitionTime":"2026-01-21T15:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.753203 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.753245 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.753257 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.753276 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.753289 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:36Z","lastTransitionTime":"2026-01-21T15:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.856671 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.856749 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.856761 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.856786 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.856799 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:36Z","lastTransitionTime":"2026-01-21T15:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.958960 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.959009 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.959021 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.959038 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:36 crc kubenswrapper[4760]: I0121 15:48:36.959050 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:36Z","lastTransitionTime":"2026-01-21T15:48:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.062608 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.062672 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.062693 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.062721 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.062739 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:37Z","lastTransitionTime":"2026-01-21T15:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.165708 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.165747 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.165756 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.165773 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.165783 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:37Z","lastTransitionTime":"2026-01-21T15:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.268185 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.268233 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.268244 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.268287 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.268298 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:37Z","lastTransitionTime":"2026-01-21T15:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.371784 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.371828 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.371839 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.371856 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.371867 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:37Z","lastTransitionTime":"2026-01-21T15:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.474143 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.474189 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.474203 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.474226 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.474238 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:37Z","lastTransitionTime":"2026-01-21T15:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.577052 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.577126 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.577144 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.577165 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.577183 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:37Z","lastTransitionTime":"2026-01-21T15:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.622047 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:48:37 crc kubenswrapper[4760]: E0121 15:48:37.622227 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.622455 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:48:37 crc kubenswrapper[4760]: E0121 15:48:37.622738 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.641238 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 02:45:13.088722606 +0000 UTC Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.679721 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.679784 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.679808 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.679838 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.679860 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:37Z","lastTransitionTime":"2026-01-21T15:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.782765 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.782821 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.782836 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.782858 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.782874 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:37Z","lastTransitionTime":"2026-01-21T15:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.885766 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.885815 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.885826 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.885840 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.885849 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:37Z","lastTransitionTime":"2026-01-21T15:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.988487 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.988539 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.988552 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.988572 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:37 crc kubenswrapper[4760]: I0121 15:48:37.988586 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:37Z","lastTransitionTime":"2026-01-21T15:48:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.091270 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.091348 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.091360 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.091378 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.091390 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:38Z","lastTransitionTime":"2026-01-21T15:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.194161 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.194215 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.194227 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.194248 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.194260 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:38Z","lastTransitionTime":"2026-01-21T15:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.296041 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.296131 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.296145 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.296170 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.296184 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:38Z","lastTransitionTime":"2026-01-21T15:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.398884 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.398958 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.398980 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.399015 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.399041 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:38Z","lastTransitionTime":"2026-01-21T15:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.502631 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.502679 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.502689 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.502706 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.502716 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:38Z","lastTransitionTime":"2026-01-21T15:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.605956 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.606009 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.606025 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.606044 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.606057 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:38Z","lastTransitionTime":"2026-01-21T15:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.621478 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.621549 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:48:38 crc kubenswrapper[4760]: E0121 15:48:38.621665 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:48:38 crc kubenswrapper[4760]: E0121 15:48:38.621816 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.641945 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 12:12:46.634923454 +0000 UTC Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.708822 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.708895 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.708918 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.708949 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.708971 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:38Z","lastTransitionTime":"2026-01-21T15:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.811697 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.811757 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.811778 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.811804 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.811822 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:38Z","lastTransitionTime":"2026-01-21T15:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.915319 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.915396 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.915413 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.915438 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:38 crc kubenswrapper[4760]: I0121 15:48:38.915456 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:38Z","lastTransitionTime":"2026-01-21T15:48:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.018524 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.018589 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.018612 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.018642 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.018664 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:39Z","lastTransitionTime":"2026-01-21T15:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.121095 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.121130 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.121139 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.121153 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.121163 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:39Z","lastTransitionTime":"2026-01-21T15:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.223832 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.223887 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.223921 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.223942 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.223954 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:39Z","lastTransitionTime":"2026-01-21T15:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.331256 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.331316 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.331360 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.331381 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.331397 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:39Z","lastTransitionTime":"2026-01-21T15:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.434507 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.434563 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.434576 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.434598 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.434613 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:39Z","lastTransitionTime":"2026-01-21T15:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.538028 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.538111 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.538133 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.538163 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.538187 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:39Z","lastTransitionTime":"2026-01-21T15:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.621669 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.621803 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:48:39 crc kubenswrapper[4760]: E0121 15:48:39.621986 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:48:39 crc kubenswrapper[4760]: E0121 15:48:39.622146 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.641170 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.641222 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.641240 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.641263 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.641280 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:39Z","lastTransitionTime":"2026-01-21T15:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.642443 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 14:42:49.852449854 +0000 UTC Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.744911 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.744945 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.744953 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.744969 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.744978 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:39Z","lastTransitionTime":"2026-01-21T15:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.799119 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=71.799093454 podStartE2EDuration="1m11.799093454s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:48:39.798663834 +0000 UTC m=+90.466433422" watchObservedRunningTime="2026-01-21 15:48:39.799093454 +0000 UTC m=+90.466863042" Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.811991 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=17.8119692 podStartE2EDuration="17.8119692s" podCreationTimestamp="2026-01-21 15:48:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:48:39.811827947 +0000 UTC m=+90.479597575" watchObservedRunningTime="2026-01-21 15:48:39.8119692 +0000 UTC m=+90.479738778" Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.847660 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.847706 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.847717 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.847734 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.847746 4760 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:39Z","lastTransitionTime":"2026-01-21T15:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.859536 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-4g84s" podStartSLOduration=72.859508317 podStartE2EDuration="1m12.859508317s" podCreationTimestamp="2026-01-21 15:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:48:39.845540087 +0000 UTC m=+90.513309665" watchObservedRunningTime="2026-01-21 15:48:39.859508317 +0000 UTC m=+90.527277895" Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.859919 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-nqxc7" podStartSLOduration=72.859913426 podStartE2EDuration="1m12.859913426s" podCreationTimestamp="2026-01-21 15:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:48:39.859770363 +0000 UTC m=+90.527539961" watchObservedRunningTime="2026-01-21 15:48:39.859913426 +0000 UTC m=+90.527683004" Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.875072 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=72.875045603 podStartE2EDuration="1m12.875045603s" podCreationTimestamp="2026-01-21 15:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 
15:48:39.875032393 +0000 UTC m=+90.542801991" watchObservedRunningTime="2026-01-21 15:48:39.875045603 +0000 UTC m=+90.542815181"
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.905692 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-dx99k" podStartSLOduration=71.905669864 podStartE2EDuration="1m11.905669864s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:48:39.905085981 +0000 UTC m=+90.572855549" watchObservedRunningTime="2026-01-21 15:48:39.905669864 +0000 UTC m=+90.573439442"
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.919573 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podStartSLOduration=72.919549393 podStartE2EDuration="1m12.919549393s" podCreationTimestamp="2026-01-21 15:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:48:39.918463649 +0000 UTC m=+90.586233227" watchObservedRunningTime="2026-01-21 15:48:39.919549393 +0000 UTC m=+90.587318991"
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.946187 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-lkblz" podStartSLOduration=71.946160575 podStartE2EDuration="1m11.946160575s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:48:39.945610422 +0000 UTC m=+90.613380000" watchObservedRunningTime="2026-01-21 15:48:39.946160575 +0000 UTC m=+90.613930163"
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.950189 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.950243 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.950255 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.950271 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.950285 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:39Z","lastTransitionTime":"2026-01-21T15:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.973467 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k8rxc" podStartSLOduration=71.973441411 podStartE2EDuration="1m11.973441411s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:48:39.958956089 +0000 UTC m=+90.626725667" watchObservedRunningTime="2026-01-21 15:48:39.973441411 +0000 UTC m=+90.641210989"
Jan 21 15:48:39 crc kubenswrapper[4760]: I0121 15:48:39.999182 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=72.999163333 podStartE2EDuration="1m12.999163333s" podCreationTimestamp="2026-01-21 15:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:48:39.998797495 +0000 UTC m=+90.666567073" watchObservedRunningTime="2026-01-21 15:48:39.999163333 +0000 UTC m=+90.666932911"
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.015158 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=41.015133449 podStartE2EDuration="41.015133449s" podCreationTimestamp="2026-01-21 15:47:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:48:40.013921502 +0000 UTC m=+90.681691080" watchObservedRunningTime="2026-01-21 15:48:40.015133449 +0000 UTC m=+90.682903037"
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.052001 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.052046 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.052058 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.052077 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.052090 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:40Z","lastTransitionTime":"2026-01-21T15:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.154284 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.154389 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.154412 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.154438 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.154456 4760 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T15:48:40Z","lastTransitionTime":"2026-01-21T15:48:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.203991 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-jzfhd"]
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.204602 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jzfhd"
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.208757 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.208802 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.208831 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.209170 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.266183 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/863db27d-8f26-459c-9883-6bf396943880-service-ca\") pod \"cluster-version-operator-5c965bbfc6-jzfhd\" (UID: \"863db27d-8f26-459c-9883-6bf396943880\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jzfhd"
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.266295 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/863db27d-8f26-459c-9883-6bf396943880-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-jzfhd\" (UID: \"863db27d-8f26-459c-9883-6bf396943880\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jzfhd"
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.266359 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/863db27d-8f26-459c-9883-6bf396943880-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-jzfhd\" (UID: \"863db27d-8f26-459c-9883-6bf396943880\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jzfhd"
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.266397 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/863db27d-8f26-459c-9883-6bf396943880-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-jzfhd\" (UID: \"863db27d-8f26-459c-9883-6bf396943880\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jzfhd"
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.266447 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/863db27d-8f26-459c-9883-6bf396943880-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-jzfhd\" (UID: \"863db27d-8f26-459c-9883-6bf396943880\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jzfhd"
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.368566 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/863db27d-8f26-459c-9883-6bf396943880-service-ca\") pod \"cluster-version-operator-5c965bbfc6-jzfhd\" (UID: \"863db27d-8f26-459c-9883-6bf396943880\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jzfhd"
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.368666 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/863db27d-8f26-459c-9883-6bf396943880-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-jzfhd\" (UID: \"863db27d-8f26-459c-9883-6bf396943880\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jzfhd"
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.368692 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/863db27d-8f26-459c-9883-6bf396943880-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-jzfhd\" (UID: \"863db27d-8f26-459c-9883-6bf396943880\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jzfhd"
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.368718 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/863db27d-8f26-459c-9883-6bf396943880-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-jzfhd\" (UID: \"863db27d-8f26-459c-9883-6bf396943880\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jzfhd"
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.368828 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/863db27d-8f26-459c-9883-6bf396943880-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-jzfhd\" (UID: \"863db27d-8f26-459c-9883-6bf396943880\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jzfhd"
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.368854 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/863db27d-8f26-459c-9883-6bf396943880-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-jzfhd\" (UID: \"863db27d-8f26-459c-9883-6bf396943880\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jzfhd"
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.368996 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/863db27d-8f26-459c-9883-6bf396943880-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-jzfhd\" (UID: \"863db27d-8f26-459c-9883-6bf396943880\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jzfhd"
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.370536 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/863db27d-8f26-459c-9883-6bf396943880-service-ca\") pod \"cluster-version-operator-5c965bbfc6-jzfhd\" (UID: \"863db27d-8f26-459c-9883-6bf396943880\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jzfhd"
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.383600 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/863db27d-8f26-459c-9883-6bf396943880-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-jzfhd\" (UID: \"863db27d-8f26-459c-9883-6bf396943880\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jzfhd"
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.385864 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/863db27d-8f26-459c-9883-6bf396943880-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-jzfhd\" (UID: \"863db27d-8f26-459c-9883-6bf396943880\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jzfhd"
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.519127 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jzfhd"
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.621930 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l"
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.622026 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:48:40 crc kubenswrapper[4760]: E0121 15:48:40.622435 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1"
Jan 21 15:48:40 crc kubenswrapper[4760]: E0121 15:48:40.622589 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.643124 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 15:08:37.240818632 +0000 UTC
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.643195 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates
Jan 21 15:48:40 crc kubenswrapper[4760]: I0121 15:48:40.651097 4760 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 21 15:48:41 crc kubenswrapper[4760]: I0121 15:48:41.307664 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jzfhd" event={"ID":"863db27d-8f26-459c-9883-6bf396943880","Type":"ContainerStarted","Data":"66eba3e615888a06cfb62fa6b42bbc05c2c492793973c2ac4fdb753b625a3bee"}
Jan 21 15:48:41 crc kubenswrapper[4760]: I0121 15:48:41.307759 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jzfhd" event={"ID":"863db27d-8f26-459c-9883-6bf396943880","Type":"ContainerStarted","Data":"ea4eb40c9b7437d283ef4c180f556b7b130686eb1a9ece27af1cf5e66b400089"}
Jan 21 15:48:41 crc kubenswrapper[4760]: I0121 15:48:41.621996 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:48:41 crc kubenswrapper[4760]: I0121 15:48:41.622030 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:48:41 crc kubenswrapper[4760]: E0121 15:48:41.622144 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 15:48:41 crc kubenswrapper[4760]: E0121 15:48:41.622364 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 15:48:42 crc kubenswrapper[4760]: I0121 15:48:42.622204 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l"
Jan 21 15:48:42 crc kubenswrapper[4760]: E0121 15:48:42.622382 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1"
Jan 21 15:48:42 crc kubenswrapper[4760]: I0121 15:48:42.622225 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:48:42 crc kubenswrapper[4760]: E0121 15:48:42.622500 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 15:48:43 crc kubenswrapper[4760]: I0121 15:48:43.621878 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:48:43 crc kubenswrapper[4760]: I0121 15:48:43.621929 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:48:43 crc kubenswrapper[4760]: E0121 15:48:43.622062 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 15:48:43 crc kubenswrapper[4760]: E0121 15:48:43.622195 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 15:48:44 crc kubenswrapper[4760]: I0121 15:48:44.622970 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l"
Jan 21 15:48:44 crc kubenswrapper[4760]: I0121 15:48:44.623098 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:48:44 crc kubenswrapper[4760]: E0121 15:48:44.623642 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 15:48:44 crc kubenswrapper[4760]: E0121 15:48:44.623739 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1"
Jan 21 15:48:44 crc kubenswrapper[4760]: I0121 15:48:44.624389 4760 scope.go:117] "RemoveContainer" containerID="941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c"
Jan 21 15:48:44 crc kubenswrapper[4760]: E0121 15:48:44.624577 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-gfprm_openshift-ovn-kubernetes(aa19ef03-9cda-4ae5-b47c-4a3bac73dc49)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49"
Jan 21 15:48:45 crc kubenswrapper[4760]: I0121 15:48:45.622389 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:48:45 crc kubenswrapper[4760]: I0121 15:48:45.622491 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:48:45 crc kubenswrapper[4760]: E0121 15:48:45.622897 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 15:48:45 crc kubenswrapper[4760]: E0121 15:48:45.623077 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 15:48:46 crc kubenswrapper[4760]: I0121 15:48:46.622067 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l"
Jan 21 15:48:46 crc kubenswrapper[4760]: I0121 15:48:46.622090 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:48:46 crc kubenswrapper[4760]: E0121 15:48:46.622256 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1"
Jan 21 15:48:46 crc kubenswrapper[4760]: E0121 15:48:46.622527 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 15:48:46 crc kubenswrapper[4760]: I0121 15:48:46.744176 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0a4b6476-7a89-41b4-b918-5628f622c7c1-metrics-certs\") pod \"network-metrics-daemon-bbr8l\" (UID: \"0a4b6476-7a89-41b4-b918-5628f622c7c1\") " pod="openshift-multus/network-metrics-daemon-bbr8l"
Jan 21 15:48:46 crc kubenswrapper[4760]: E0121 15:48:46.744402 4760 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 21 15:48:46 crc kubenswrapper[4760]: E0121 15:48:46.744528 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0a4b6476-7a89-41b4-b918-5628f622c7c1-metrics-certs podName:0a4b6476-7a89-41b4-b918-5628f622c7c1 nodeName:}" failed. No retries permitted until 2026-01-21 15:49:50.744497822 +0000 UTC m=+161.412267400 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0a4b6476-7a89-41b4-b918-5628f622c7c1-metrics-certs") pod "network-metrics-daemon-bbr8l" (UID: "0a4b6476-7a89-41b4-b918-5628f622c7c1") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 21 15:48:47 crc kubenswrapper[4760]: I0121 15:48:47.622438 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:48:47 crc kubenswrapper[4760]: E0121 15:48:47.622627 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 15:48:47 crc kubenswrapper[4760]: I0121 15:48:47.622698 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:48:47 crc kubenswrapper[4760]: E0121 15:48:47.622876 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 15:48:48 crc kubenswrapper[4760]: I0121 15:48:48.622241 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:48:48 crc kubenswrapper[4760]: I0121 15:48:48.622436 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l"
Jan 21 15:48:48 crc kubenswrapper[4760]: E0121 15:48:48.622818 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1"
Jan 21 15:48:48 crc kubenswrapper[4760]: E0121 15:48:48.623202 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 15:48:49 crc kubenswrapper[4760]: I0121 15:48:49.622023 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:48:49 crc kubenswrapper[4760]: I0121 15:48:49.622028 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:48:49 crc kubenswrapper[4760]: E0121 15:48:49.624435 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 15:48:49 crc kubenswrapper[4760]: E0121 15:48:49.624566 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 15:48:50 crc kubenswrapper[4760]: I0121 15:48:50.621969 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:48:50 crc kubenswrapper[4760]: I0121 15:48:50.622083 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l"
Jan 21 15:48:50 crc kubenswrapper[4760]: E0121 15:48:50.622246 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 15:48:50 crc kubenswrapper[4760]: E0121 15:48:50.622377 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1"
Jan 21 15:48:51 crc kubenswrapper[4760]: I0121 15:48:51.621728 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:48:51 crc kubenswrapper[4760]: I0121 15:48:51.621864 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:48:51 crc kubenswrapper[4760]: E0121 15:48:51.621916 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 15:48:51 crc kubenswrapper[4760]: E0121 15:48:51.622040 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 15:48:52 crc kubenswrapper[4760]: I0121 15:48:52.622303 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l"
Jan 21 15:48:52 crc kubenswrapper[4760]: E0121 15:48:52.622526 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1"
Jan 21 15:48:52 crc kubenswrapper[4760]: I0121 15:48:52.622792 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:48:52 crc kubenswrapper[4760]: E0121 15:48:52.622890 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 15:48:53 crc kubenswrapper[4760]: I0121 15:48:53.621562 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:48:53 crc kubenswrapper[4760]: I0121 15:48:53.621727 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:48:53 crc kubenswrapper[4760]: E0121 15:48:53.621763 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 15:48:53 crc kubenswrapper[4760]: E0121 15:48:53.621921 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 15:48:54 crc kubenswrapper[4760]: I0121 15:48:54.621706 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:48:54 crc kubenswrapper[4760]: I0121 15:48:54.621706 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l"
Jan 21 15:48:54 crc kubenswrapper[4760]: E0121 15:48:54.621864 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 15:48:54 crc kubenswrapper[4760]: E0121 15:48:54.621956 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1"
Jan 21 15:48:55 crc kubenswrapper[4760]: I0121 15:48:55.622132 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:48:55 crc kubenswrapper[4760]: I0121 15:48:55.622277 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:48:55 crc kubenswrapper[4760]: E0121 15:48:55.622363 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:48:55 crc kubenswrapper[4760]: E0121 15:48:55.622552 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:48:56 crc kubenswrapper[4760]: I0121 15:48:56.621833 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:48:56 crc kubenswrapper[4760]: I0121 15:48:56.622014 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:48:56 crc kubenswrapper[4760]: E0121 15:48:56.622151 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:48:56 crc kubenswrapper[4760]: E0121 15:48:56.622466 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:48:57 crc kubenswrapper[4760]: I0121 15:48:57.622027 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:48:57 crc kubenswrapper[4760]: E0121 15:48:57.622241 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:48:57 crc kubenswrapper[4760]: I0121 15:48:57.622422 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:48:57 crc kubenswrapper[4760]: E0121 15:48:57.622607 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:48:58 crc kubenswrapper[4760]: I0121 15:48:58.622303 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:48:58 crc kubenswrapper[4760]: I0121 15:48:58.622471 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:48:58 crc kubenswrapper[4760]: E0121 15:48:58.622514 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:48:58 crc kubenswrapper[4760]: E0121 15:48:58.623221 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:48:58 crc kubenswrapper[4760]: I0121 15:48:58.624495 4760 scope.go:117] "RemoveContainer" containerID="941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c" Jan 21 15:48:58 crc kubenswrapper[4760]: E0121 15:48:58.624783 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-gfprm_openshift-ovn-kubernetes(aa19ef03-9cda-4ae5-b47c-4a3bac73dc49)\"" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" Jan 21 15:48:59 crc kubenswrapper[4760]: I0121 15:48:59.621905 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:48:59 crc kubenswrapper[4760]: I0121 15:48:59.621916 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:48:59 crc kubenswrapper[4760]: E0121 15:48:59.624972 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:48:59 crc kubenswrapper[4760]: E0121 15:48:59.625167 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:49:00 crc kubenswrapper[4760]: I0121 15:49:00.622562 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:49:00 crc kubenswrapper[4760]: I0121 15:49:00.622654 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:49:00 crc kubenswrapper[4760]: E0121 15:49:00.622751 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:49:00 crc kubenswrapper[4760]: E0121 15:49:00.622840 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:49:01 crc kubenswrapper[4760]: I0121 15:49:01.622041 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:49:01 crc kubenswrapper[4760]: I0121 15:49:01.622106 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:49:01 crc kubenswrapper[4760]: E0121 15:49:01.622977 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:49:01 crc kubenswrapper[4760]: E0121 15:49:01.623318 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:49:02 crc kubenswrapper[4760]: I0121 15:49:02.574999 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dx99k_7300c51f-415f-4696-bda1-a9e79ae5704a/kube-multus/1.log" Jan 21 15:49:02 crc kubenswrapper[4760]: I0121 15:49:02.575560 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dx99k_7300c51f-415f-4696-bda1-a9e79ae5704a/kube-multus/0.log" Jan 21 15:49:02 crc kubenswrapper[4760]: I0121 15:49:02.575648 4760 generic.go:334] "Generic (PLEG): container finished" podID="7300c51f-415f-4696-bda1-a9e79ae5704a" containerID="293f11d27cd6f37ed1446eb9d03303cd0d18c5e0c23fb8fce2818caaaab93cc5" exitCode=1 Jan 21 15:49:02 crc kubenswrapper[4760]: I0121 15:49:02.575730 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dx99k" event={"ID":"7300c51f-415f-4696-bda1-a9e79ae5704a","Type":"ContainerDied","Data":"293f11d27cd6f37ed1446eb9d03303cd0d18c5e0c23fb8fce2818caaaab93cc5"} Jan 21 15:49:02 crc kubenswrapper[4760]: I0121 15:49:02.575811 4760 scope.go:117] "RemoveContainer" containerID="55601d2bed9e40ceb1dc0d4796864b9db8b5e7f324aa5cf23340d953f9a6eba6" Jan 21 15:49:02 crc kubenswrapper[4760]: I0121 15:49:02.576301 4760 scope.go:117] "RemoveContainer" containerID="293f11d27cd6f37ed1446eb9d03303cd0d18c5e0c23fb8fce2818caaaab93cc5" Jan 21 15:49:02 crc kubenswrapper[4760]: E0121 15:49:02.576575 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-dx99k_openshift-multus(7300c51f-415f-4696-bda1-a9e79ae5704a)\"" pod="openshift-multus/multus-dx99k" podUID="7300c51f-415f-4696-bda1-a9e79ae5704a" Jan 21 15:49:02 crc kubenswrapper[4760]: I0121 15:49:02.599644 4760 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-jzfhd" podStartSLOduration=94.599621684 podStartE2EDuration="1m34.599621684s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:48:41.334782478 +0000 UTC m=+92.002552066" watchObservedRunningTime="2026-01-21 15:49:02.599621684 +0000 UTC m=+113.267391252" Jan 21 15:49:02 crc kubenswrapper[4760]: I0121 15:49:02.622447 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:49:02 crc kubenswrapper[4760]: I0121 15:49:02.622523 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:49:02 crc kubenswrapper[4760]: E0121 15:49:02.623128 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:49:02 crc kubenswrapper[4760]: E0121 15:49:02.623361 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:49:03 crc kubenswrapper[4760]: I0121 15:49:03.580185 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dx99k_7300c51f-415f-4696-bda1-a9e79ae5704a/kube-multus/1.log" Jan 21 15:49:03 crc kubenswrapper[4760]: I0121 15:49:03.621618 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:49:03 crc kubenswrapper[4760]: E0121 15:49:03.621786 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:49:03 crc kubenswrapper[4760]: I0121 15:49:03.621618 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:49:03 crc kubenswrapper[4760]: E0121 15:49:03.621918 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:49:04 crc kubenswrapper[4760]: I0121 15:49:04.622405 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:49:04 crc kubenswrapper[4760]: I0121 15:49:04.622390 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:49:04 crc kubenswrapper[4760]: E0121 15:49:04.622572 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:49:04 crc kubenswrapper[4760]: E0121 15:49:04.622764 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:49:05 crc kubenswrapper[4760]: I0121 15:49:05.622038 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:49:05 crc kubenswrapper[4760]: I0121 15:49:05.622201 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:49:05 crc kubenswrapper[4760]: E0121 15:49:05.622422 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:49:05 crc kubenswrapper[4760]: E0121 15:49:05.622544 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:49:06 crc kubenswrapper[4760]: I0121 15:49:06.622235 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:49:06 crc kubenswrapper[4760]: I0121 15:49:06.622267 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:49:06 crc kubenswrapper[4760]: E0121 15:49:06.622466 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:49:06 crc kubenswrapper[4760]: E0121 15:49:06.622649 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:49:07 crc kubenswrapper[4760]: I0121 15:49:07.621957 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:49:07 crc kubenswrapper[4760]: I0121 15:49:07.622023 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:49:07 crc kubenswrapper[4760]: E0121 15:49:07.622163 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:49:07 crc kubenswrapper[4760]: E0121 15:49:07.622280 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:49:08 crc kubenswrapper[4760]: I0121 15:49:08.621811 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:49:08 crc kubenswrapper[4760]: I0121 15:49:08.621953 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:49:08 crc kubenswrapper[4760]: E0121 15:49:08.622243 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:49:08 crc kubenswrapper[4760]: E0121 15:49:08.622300 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:49:09 crc kubenswrapper[4760]: E0121 15:49:09.576264 4760 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 21 15:49:09 crc kubenswrapper[4760]: I0121 15:49:09.623870 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:49:09 crc kubenswrapper[4760]: I0121 15:49:09.623915 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:49:09 crc kubenswrapper[4760]: E0121 15:49:09.627126 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 15:49:09 crc kubenswrapper[4760]: E0121 15:49:09.627687 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:49:09 crc kubenswrapper[4760]: E0121 15:49:09.745627 4760 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 21 15:49:10 crc kubenswrapper[4760]: I0121 15:49:10.621677 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:49:10 crc kubenswrapper[4760]: E0121 15:49:10.621830 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1" Jan 21 15:49:10 crc kubenswrapper[4760]: I0121 15:49:10.622055 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:49:10 crc kubenswrapper[4760]: E0121 15:49:10.622193 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 15:49:11 crc kubenswrapper[4760]: I0121 15:49:11.621948 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:49:11 crc kubenswrapper[4760]: I0121 15:49:11.622087 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:49:11 crc kubenswrapper[4760]: E0121 15:49:11.622124 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 15:49:11 crc kubenswrapper[4760]: E0121 15:49:11.622292 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 15:49:12 crc kubenswrapper[4760]: I0121 15:49:12.622130 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:49:12 crc kubenswrapper[4760]: I0121 15:49:12.622232 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l"
Jan 21 15:49:12 crc kubenswrapper[4760]: E0121 15:49:12.622301 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 15:49:12 crc kubenswrapper[4760]: E0121 15:49:12.622426 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1"
Jan 21 15:49:13 crc kubenswrapper[4760]: I0121 15:49:13.622380 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:49:13 crc kubenswrapper[4760]: E0121 15:49:13.622632 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 15:49:13 crc kubenswrapper[4760]: I0121 15:49:13.623169 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:49:13 crc kubenswrapper[4760]: E0121 15:49:13.623362 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 15:49:13 crc kubenswrapper[4760]: I0121 15:49:13.623606 4760 scope.go:117] "RemoveContainer" containerID="941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c"
Jan 21 15:49:14 crc kubenswrapper[4760]: I0121 15:49:14.375636 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-bbr8l"]
Jan 21 15:49:14 crc kubenswrapper[4760]: I0121 15:49:14.375902 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l"
Jan 21 15:49:14 crc kubenswrapper[4760]: E0121 15:49:14.376134 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1"
Jan 21 15:49:14 crc kubenswrapper[4760]: I0121 15:49:14.619654 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gfprm_aa19ef03-9cda-4ae5-b47c-4a3bac73dc49/ovnkube-controller/3.log"
Jan 21 15:49:14 crc kubenswrapper[4760]: I0121 15:49:14.621406 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:49:14 crc kubenswrapper[4760]: E0121 15:49:14.621543 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 15:49:14 crc kubenswrapper[4760]: I0121 15:49:14.622293 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" event={"ID":"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49","Type":"ContainerStarted","Data":"80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff"}
Jan 21 15:49:14 crc kubenswrapper[4760]: I0121 15:49:14.622688 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm"
Jan 21 15:49:14 crc kubenswrapper[4760]: I0121 15:49:14.658220 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" podStartSLOduration=106.658202699 podStartE2EDuration="1m46.658202699s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:14.657915142 +0000 UTC m=+125.325684740" watchObservedRunningTime="2026-01-21 15:49:14.658202699 +0000 UTC m=+125.325972277"
Jan 21 15:49:14 crc kubenswrapper[4760]: E0121 15:49:14.747800 4760 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 21 15:49:15 crc kubenswrapper[4760]: I0121 15:49:15.622097 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:49:15 crc kubenswrapper[4760]: I0121 15:49:15.622123 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:49:15 crc kubenswrapper[4760]: E0121 15:49:15.622366 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 15:49:15 crc kubenswrapper[4760]: E0121 15:49:15.622521 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 15:49:15 crc kubenswrapper[4760]: I0121 15:49:15.622793 4760 scope.go:117] "RemoveContainer" containerID="293f11d27cd6f37ed1446eb9d03303cd0d18c5e0c23fb8fce2818caaaab93cc5"
Jan 21 15:49:16 crc kubenswrapper[4760]: I0121 15:49:16.622401 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l"
Jan 21 15:49:16 crc kubenswrapper[4760]: I0121 15:49:16.622495 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:49:16 crc kubenswrapper[4760]: E0121 15:49:16.622557 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1"
Jan 21 15:49:16 crc kubenswrapper[4760]: E0121 15:49:16.622691 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 15:49:16 crc kubenswrapper[4760]: I0121 15:49:16.631157 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dx99k_7300c51f-415f-4696-bda1-a9e79ae5704a/kube-multus/1.log"
Jan 21 15:49:16 crc kubenswrapper[4760]: I0121 15:49:16.631236 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dx99k" event={"ID":"7300c51f-415f-4696-bda1-a9e79ae5704a","Type":"ContainerStarted","Data":"d068d702c3829273a54de5ce05bc939750eeed404a6fdced862bb6cd1f238505"}
Jan 21 15:49:17 crc kubenswrapper[4760]: I0121 15:49:17.622059 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:49:17 crc kubenswrapper[4760]: I0121 15:49:17.622148 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:49:17 crc kubenswrapper[4760]: E0121 15:49:17.622874 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 15:49:17 crc kubenswrapper[4760]: E0121 15:49:17.623038 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 15:49:18 crc kubenswrapper[4760]: I0121 15:49:18.622082 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l"
Jan 21 15:49:18 crc kubenswrapper[4760]: I0121 15:49:18.622082 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:49:18 crc kubenswrapper[4760]: E0121 15:49:18.623772 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bbr8l" podUID="0a4b6476-7a89-41b4-b918-5628f622c7c1"
Jan 21 15:49:18 crc kubenswrapper[4760]: E0121 15:49:18.623992 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 15:49:19 crc kubenswrapper[4760]: I0121 15:49:19.621848 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:49:19 crc kubenswrapper[4760]: I0121 15:49:19.621848 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:49:19 crc kubenswrapper[4760]: E0121 15:49:19.623270 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 15:49:19 crc kubenswrapper[4760]: E0121 15:49:19.623425 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.621819 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.621819 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.625757 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.625991 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.626063 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.626106 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.781377 4760 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.833036 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-jx6dn"]
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.833609 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-jx6dn"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.838241 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.838879 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.838960 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.838904 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.839418 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.840411 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.850093 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-vqm67"]
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.850554 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.850822 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vqm67"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.851693 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.855608 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-cxv6j"]
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.855985 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.856278 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-clnlg"]
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.856728 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx"]
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.857160 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.857444 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cxv6j"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.857706 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.858367 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-clnlg"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.859123 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76967888-2735-467c-a288-a7bfe13f5690-config\") pod \"apiserver-76f77b778f-jx6dn\" (UID: \"76967888-2735-467c-a288-a7bfe13f5690\") " pod="openshift-apiserver/apiserver-76f77b778f-jx6dn"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.859160 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/76967888-2735-467c-a288-a7bfe13f5690-audit\") pod \"apiserver-76f77b778f-jx6dn\" (UID: \"76967888-2735-467c-a288-a7bfe13f5690\") " pod="openshift-apiserver/apiserver-76f77b778f-jx6dn"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.859202 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76967888-2735-467c-a288-a7bfe13f5690-serving-cert\") pod \"apiserver-76f77b778f-jx6dn\" (UID: \"76967888-2735-467c-a288-a7bfe13f5690\") " pod="openshift-apiserver/apiserver-76f77b778f-jx6dn"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.859220 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/76967888-2735-467c-a288-a7bfe13f5690-image-import-ca\") pod \"apiserver-76f77b778f-jx6dn\" (UID: \"76967888-2735-467c-a288-a7bfe13f5690\") " pod="openshift-apiserver/apiserver-76f77b778f-jx6dn"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.859243 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76967888-2735-467c-a288-a7bfe13f5690-trusted-ca-bundle\") pod \"apiserver-76f77b778f-jx6dn\" (UID: \"76967888-2735-467c-a288-a7bfe13f5690\") " pod="openshift-apiserver/apiserver-76f77b778f-jx6dn"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.859260 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckkpr\" (UniqueName: \"kubernetes.io/projected/76967888-2735-467c-a288-a7bfe13f5690-kube-api-access-ckkpr\") pod \"apiserver-76f77b778f-jx6dn\" (UID: \"76967888-2735-467c-a288-a7bfe13f5690\") " pod="openshift-apiserver/apiserver-76f77b778f-jx6dn"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.859290 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/76967888-2735-467c-a288-a7bfe13f5690-etcd-serving-ca\") pod \"apiserver-76f77b778f-jx6dn\" (UID: \"76967888-2735-467c-a288-a7bfe13f5690\") " pod="openshift-apiserver/apiserver-76f77b778f-jx6dn"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.859309 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/76967888-2735-467c-a288-a7bfe13f5690-audit-dir\") pod \"apiserver-76f77b778f-jx6dn\" (UID: \"76967888-2735-467c-a288-a7bfe13f5690\") " pod="openshift-apiserver/apiserver-76f77b778f-jx6dn"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.859364 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/76967888-2735-467c-a288-a7bfe13f5690-node-pullsecrets\") pod \"apiserver-76f77b778f-jx6dn\" (UID: \"76967888-2735-467c-a288-a7bfe13f5690\") " pod="openshift-apiserver/apiserver-76f77b778f-jx6dn"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.859380 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/76967888-2735-467c-a288-a7bfe13f5690-etcd-client\") pod \"apiserver-76f77b778f-jx6dn\" (UID: \"76967888-2735-467c-a288-a7bfe13f5690\") " pod="openshift-apiserver/apiserver-76f77b778f-jx6dn"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.859400 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/76967888-2735-467c-a288-a7bfe13f5690-encryption-config\") pod \"apiserver-76f77b778f-jx6dn\" (UID: \"76967888-2735-467c-a288-a7bfe13f5690\") " pod="openshift-apiserver/apiserver-76f77b778f-jx6dn"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.860565 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.862109 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fg6vb"]
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.862975 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fg6vb"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.863443 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.866770 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.866806 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.867121 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-s7vh9"]
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.867451 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.867517 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.867559 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.867517 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.867836 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.867940 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.867998 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-s7vh9"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.868015 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.868062 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.868118 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.868185 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.869473 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-94jxh"]
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.871282 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-94jxh"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.875080 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.875185 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.875236 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.875302 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.875354 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.875482 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.875534 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.875629 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.875622 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.875843 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.876128 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.876172 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.876789 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.877856 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qdfkz"]
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.878662 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.879141 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.879499 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qdfkz"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.879733 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.880610 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.881119 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.883103 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-tkwlz"]
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.883777 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-tkwlz"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.907059 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.909878 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.910053 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.910393 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.910671 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.910996 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-z4gv8"]
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.911014 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.911142 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.911222 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.911492 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.912613 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.913574 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.913715 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.914051 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.914265 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.914815 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.915547 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-z4gv8"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.915757 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.915968 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.916012 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.916114 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.916241 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.916446 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.916688 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.916914 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.919303 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.921545 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.922529 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.923398 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.924955 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-gqcsb"]
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.925667 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-gqcsb"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.926094 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqsn5"]
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.926873 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hcqpm"]
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.927215 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hcqpm"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.927464 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-nq7q6"]
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.927583 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqsn5"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.928005 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-nq7q6"
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.930908 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qmwgj"]
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.931609 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-w6lsk"]
Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.931653 4760 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qmwgj" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.931723 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.932959 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fz22j"] Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.934874 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.935017 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w6lsk" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.936282 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fz22j" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.948065 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f2k2z"] Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.948969 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-zg9b5"] Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.949523 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f2k2z" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.970286 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.970492 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.970599 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.971054 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.972312 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-8jb7f"] Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.973849 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-console-config\") pod \"console-f9d7485db-clnlg\" (UID: \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\") " pod="openshift-console/console-f9d7485db-clnlg" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.973951 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vc9ph\" (UniqueName: \"kubernetes.io/projected/1096cfed-6553-45e5-927a-5169e506e758-kube-api-access-vc9ph\") pod \"openshift-apiserver-operator-796bbdcf4f-fg6vb\" (UID: \"1096cfed-6553-45e5-927a-5169e506e758\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fg6vb" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.973993 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/258aeabf-45e1-4b66-bec4-1c7f834e2b77-trusted-ca\") pod \"console-operator-58897d9998-nq7q6\" (UID: \"258aeabf-45e1-4b66-bec4-1c7f834e2b77\") " pod="openshift-console-operator/console-operator-58897d9998-nq7q6" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.974037 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90c20a1b-2941-4f3e-937d-8629dc663dd2-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-94jxh\" (UID: \"90c20a1b-2941-4f3e-937d-8629dc663dd2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-94jxh" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.974066 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d41f70b-9e7e-4e99-8fad-ad4a5a646df1-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sqsn5\" (UID: \"5d41f70b-9e7e-4e99-8fad-ad4a5a646df1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqsn5" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.974102 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8675f6e4-a233-45db-8916-68947da2554c-serving-cert\") pod \"route-controller-manager-6576b87f9c-cxv6j\" (UID: \"8675f6e4-a233-45db-8916-68947da2554c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cxv6j" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.974142 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd1ef85b-03cb-4332-98e6-bcc6d38933dd-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-tkwlz\" (UID: \"dd1ef85b-03cb-4332-98e6-bcc6d38933dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tkwlz" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.974189 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76967888-2735-467c-a288-a7bfe13f5690-trusted-ca-bundle\") pod \"apiserver-76f77b778f-jx6dn\" (UID: \"76967888-2735-467c-a288-a7bfe13f5690\") " pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.974222 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1096cfed-6553-45e5-927a-5169e506e758-config\") pod \"openshift-apiserver-operator-796bbdcf4f-fg6vb\" (UID: \"1096cfed-6553-45e5-927a-5169e506e758\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fg6vb" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.974465 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-console-oauth-config\") pod \"console-f9d7485db-clnlg\" (UID: \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\") " pod="openshift-console/console-f9d7485db-clnlg" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.974527 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-trusted-ca-bundle\") pod \"console-f9d7485db-clnlg\" (UID: \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\") " 
pod="openshift-console/console-f9d7485db-clnlg" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.974564 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5d41f70b-9e7e-4e99-8fad-ad4a5a646df1-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sqsn5\" (UID: \"5d41f70b-9e7e-4e99-8fad-ad4a5a646df1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqsn5" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.974616 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1affbee-c661-46d6-89cd-08977e347d3c-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-hcqpm\" (UID: \"c1affbee-c661-46d6-89cd-08977e347d3c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hcqpm" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.974656 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45549dc9-0155-4d34-927c-25c5fb82872b-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-s9stx\" (UID: \"45549dc9-0155-4d34-927c-25c5fb82872b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.974684 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggnkt\" (UniqueName: \"kubernetes.io/projected/dd1ef85b-03cb-4332-98e6-bcc6d38933dd-kube-api-access-ggnkt\") pod \"authentication-operator-69f744f599-tkwlz\" (UID: \"dd1ef85b-03cb-4332-98e6-bcc6d38933dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tkwlz" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.974712 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9c28m\" (UniqueName: \"kubernetes.io/projected/c6d4e7cb-581f-4404-b64f-03fb526edeaf-kube-api-access-9c28m\") pod \"machine-approver-56656f9798-vqm67\" (UID: \"c6d4e7cb-581f-4404-b64f-03fb526edeaf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vqm67" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.974750 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/76967888-2735-467c-a288-a7bfe13f5690-audit-dir\") pod \"apiserver-76f77b778f-jx6dn\" (UID: \"76967888-2735-467c-a288-a7bfe13f5690\") " pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.974787 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l99t9\" (UniqueName: \"kubernetes.io/projected/e527c232-4f49-4920-a0cc-403df50c3f9c-kube-api-access-l99t9\") pod \"cluster-samples-operator-665b6dd947-qdfkz\" (UID: \"e527c232-4f49-4920-a0cc-403df50c3f9c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qdfkz" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.974820 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/258aeabf-45e1-4b66-bec4-1c7f834e2b77-serving-cert\") pod \"console-operator-58897d9998-nq7q6\" (UID: \"258aeabf-45e1-4b66-bec4-1c7f834e2b77\") " pod="openshift-console-operator/console-operator-58897d9998-nq7q6" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.974892 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a9e71cc4-57de-4f68-9b1f-ddf9ce2deace-auth-proxy-config\") pod \"machine-config-operator-74547568cd-w6lsk\" (UID: 
\"a9e71cc4-57de-4f68-9b1f-ddf9ce2deace\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w6lsk" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.974960 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d41f70b-9e7e-4e99-8fad-ad4a5a646df1-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sqsn5\" (UID: \"5d41f70b-9e7e-4e99-8fad-ad4a5a646df1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqsn5" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.974993 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-p467d"] Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.975827 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/76967888-2735-467c-a288-a7bfe13f5690-audit-dir\") pod \"apiserver-76f77b778f-jx6dn\" (UID: \"76967888-2735-467c-a288-a7bfe13f5690\") " pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.975961 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zg9b5" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.976684 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-8jb7f" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.975005 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27643829-9abc-4f6c-a6e9-5f0c86eb7594-config\") pod \"controller-manager-879f6c89f-s7vh9\" (UID: \"27643829-9abc-4f6c-a6e9-5f0c86eb7594\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s7vh9" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.978066 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/27643829-9abc-4f6c-a6e9-5f0c86eb7594-client-ca\") pod \"controller-manager-879f6c89f-s7vh9\" (UID: \"27643829-9abc-4f6c-a6e9-5f0c86eb7594\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s7vh9" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.978111 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k25rd\" (UniqueName: \"kubernetes.io/projected/27643829-9abc-4f6c-a6e9-5f0c86eb7594-kube-api-access-k25rd\") pod \"controller-manager-879f6c89f-s7vh9\" (UID: \"27643829-9abc-4f6c-a6e9-5f0c86eb7594\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s7vh9" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.978156 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a34869a5-5ade-43ba-874a-487b308a13ca-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fz22j\" (UID: \"a34869a5-5ade-43ba-874a-487b308a13ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-fz22j" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.978210 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-nc9xl\" (UniqueName: \"kubernetes.io/projected/90c20a1b-2941-4f3e-937d-8629dc663dd2-kube-api-access-nc9xl\") pod \"openshift-controller-manager-operator-756b6f6bc6-94jxh\" (UID: \"90c20a1b-2941-4f3e-937d-8629dc663dd2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-94jxh" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.978241 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1affbee-c661-46d6-89cd-08977e347d3c-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-hcqpm\" (UID: \"c1affbee-c661-46d6-89cd-08977e347d3c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hcqpm" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.978270 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/07d98bec-099c-43a6-aa43-a96450505b5b-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-qmwgj\" (UID: \"07d98bec-099c-43a6-aa43-a96450505b5b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qmwgj" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.978307 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a9e71cc4-57de-4f68-9b1f-ddf9ce2deace-images\") pod \"machine-config-operator-74547568cd-w6lsk\" (UID: \"a9e71cc4-57de-4f68-9b1f-ddf9ce2deace\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w6lsk" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.978368 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/76967888-2735-467c-a288-a7bfe13f5690-encryption-config\") pod 
\"apiserver-76f77b778f-jx6dn\" (UID: \"76967888-2735-467c-a288-a7bfe13f5690\") " pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.978397 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76967888-2735-467c-a288-a7bfe13f5690-trusted-ca-bundle\") pod \"apiserver-76f77b778f-jx6dn\" (UID: \"76967888-2735-467c-a288-a7bfe13f5690\") " pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.978402 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/45549dc9-0155-4d34-927c-25c5fb82872b-audit-dir\") pod \"apiserver-7bbb656c7d-s9stx\" (UID: \"45549dc9-0155-4d34-927c-25c5fb82872b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.978526 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/27643829-9abc-4f6c-a6e9-5f0c86eb7594-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-s7vh9\" (UID: \"27643829-9abc-4f6c-a6e9-5f0c86eb7594\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s7vh9" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.978578 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/258aeabf-45e1-4b66-bec4-1c7f834e2b77-config\") pod \"console-operator-58897d9998-nq7q6\" (UID: \"258aeabf-45e1-4b66-bec4-1c7f834e2b77\") " pod="openshift-console-operator/console-operator-58897d9998-nq7q6" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.978608 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-console-serving-cert\") pod \"console-f9d7485db-clnlg\" (UID: \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\") " pod="openshift-console/console-f9d7485db-clnlg" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.978650 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bfvn\" (UniqueName: \"kubernetes.io/projected/07d98bec-099c-43a6-aa43-a96450505b5b-kube-api-access-2bfvn\") pod \"cluster-image-registry-operator-dc59b4c8b-qmwgj\" (UID: \"07d98bec-099c-43a6-aa43-a96450505b5b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qmwgj" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.978705 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.978755 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76967888-2735-467c-a288-a7bfe13f5690-serving-cert\") pod \"apiserver-76f77b778f-jx6dn\" (UID: \"76967888-2735-467c-a288-a7bfe13f5690\") " pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.978794 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/07d98bec-099c-43a6-aa43-a96450505b5b-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-qmwgj\" (UID: \"07d98bec-099c-43a6-aa43-a96450505b5b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qmwgj" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.978878 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1affbee-c661-46d6-89cd-08977e347d3c-config\") pod 
\"kube-controller-manager-operator-78b949d7b-hcqpm\" (UID: \"c1affbee-c661-46d6-89cd-08977e347d3c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hcqpm" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.978916 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/76967888-2735-467c-a288-a7bfe13f5690-image-import-ca\") pod \"apiserver-76f77b778f-jx6dn\" (UID: \"76967888-2735-467c-a288-a7bfe13f5690\") " pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.978983 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c6d4e7cb-581f-4404-b64f-03fb526edeaf-auth-proxy-config\") pod \"machine-approver-56656f9798-vqm67\" (UID: \"c6d4e7cb-581f-4404-b64f-03fb526edeaf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vqm67" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.979004 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.979038 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.979199 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.979377 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.979710 4760 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"image-registry-operator-tls" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.979805 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6d4e7cb-581f-4404-b64f-03fb526edeaf-config\") pod \"machine-approver-56656f9798-vqm67\" (UID: \"c6d4e7cb-581f-4404-b64f-03fb526edeaf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vqm67" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.979935 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/816d3ef0-0471-4ee0-998b-947d78f8d3f3-serving-cert\") pod \"openshift-config-operator-7777fb866f-z4gv8\" (UID: \"816d3ef0-0471-4ee0-998b-947d78f8d3f3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-z4gv8" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.980009 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/07d98bec-099c-43a6-aa43-a96450505b5b-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-qmwgj\" (UID: \"07d98bec-099c-43a6-aa43-a96450505b5b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qmwgj" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.980068 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckkpr\" (UniqueName: \"kubernetes.io/projected/76967888-2735-467c-a288-a7bfe13f5690-kube-api-access-ckkpr\") pod \"apiserver-76f77b778f-jx6dn\" (UID: \"76967888-2735-467c-a288-a7bfe13f5690\") " pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.980104 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ks7b9\" (UniqueName: 
\"kubernetes.io/projected/a9e71cc4-57de-4f68-9b1f-ddf9ce2deace-kube-api-access-ks7b9\") pod \"machine-config-operator-74547568cd-w6lsk\" (UID: \"a9e71cc4-57de-4f68-9b1f-ddf9ce2deace\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w6lsk" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.980137 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/45549dc9-0155-4d34-927c-25c5fb82872b-encryption-config\") pod \"apiserver-7bbb656c7d-s9stx\" (UID: \"45549dc9-0155-4d34-927c-25c5fb82872b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.980352 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/816d3ef0-0471-4ee0-998b-947d78f8d3f3-available-featuregates\") pod \"openshift-config-operator-7777fb866f-z4gv8\" (UID: \"816d3ef0-0471-4ee0-998b-947d78f8d3f3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-z4gv8" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.980391 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvl8t\" (UniqueName: \"kubernetes.io/projected/45549dc9-0155-4d34-927c-25c5fb82872b-kube-api-access-qvl8t\") pod \"apiserver-7bbb656c7d-s9stx\" (UID: \"45549dc9-0155-4d34-927c-25c5fb82872b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.980475 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/76967888-2735-467c-a288-a7bfe13f5690-etcd-serving-ca\") pod \"apiserver-76f77b778f-jx6dn\" (UID: \"76967888-2735-467c-a288-a7bfe13f5690\") " pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" Jan 21 
15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.980517 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgh59\" (UniqueName: \"kubernetes.io/projected/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-kube-api-access-zgh59\") pod \"console-f9d7485db-clnlg\" (UID: \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\") " pod="openshift-console/console-f9d7485db-clnlg" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.980550 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qssz\" (UniqueName: \"kubernetes.io/projected/816d3ef0-0471-4ee0-998b-947d78f8d3f3-kube-api-access-8qssz\") pod \"openshift-config-operator-7777fb866f-z4gv8\" (UID: \"816d3ef0-0471-4ee0-998b-947d78f8d3f3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-z4gv8" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.980598 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/45549dc9-0155-4d34-927c-25c5fb82872b-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-s9stx\" (UID: \"45549dc9-0155-4d34-927c-25c5fb82872b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.980627 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45549dc9-0155-4d34-927c-25c5fb82872b-serving-cert\") pod \"apiserver-7bbb656c7d-s9stx\" (UID: \"45549dc9-0155-4d34-927c-25c5fb82872b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.981479 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/76967888-2735-467c-a288-a7bfe13f5690-etcd-serving-ca\") pod \"apiserver-76f77b778f-jx6dn\" 
(UID: \"76967888-2735-467c-a288-a7bfe13f5690\") " pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.981698 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e527c232-4f49-4920-a0cc-403df50c3f9c-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-qdfkz\" (UID: \"e527c232-4f49-4920-a0cc-403df50c3f9c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qdfkz" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.981749 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-oauth-serving-cert\") pod \"console-f9d7485db-clnlg\" (UID: \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\") " pod="openshift-console/console-f9d7485db-clnlg" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.981787 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgzbf\" (UniqueName: \"kubernetes.io/projected/a34869a5-5ade-43ba-874a-487b308a13ca-kube-api-access-jgzbf\") pod \"marketplace-operator-79b997595-fz22j\" (UID: \"a34869a5-5ade-43ba-874a-487b308a13ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-fz22j" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.981831 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c6d4e7cb-581f-4404-b64f-03fb526edeaf-machine-approver-tls\") pod \"machine-approver-56656f9798-vqm67\" (UID: \"c6d4e7cb-581f-4404-b64f-03fb526edeaf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vqm67" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.981912 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/76967888-2735-467c-a288-a7bfe13f5690-node-pullsecrets\") pod \"apiserver-76f77b778f-jx6dn\" (UID: \"76967888-2735-467c-a288-a7bfe13f5690\") " pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.982094 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/76967888-2735-467c-a288-a7bfe13f5690-etcd-client\") pod \"apiserver-76f77b778f-jx6dn\" (UID: \"76967888-2735-467c-a288-a7bfe13f5690\") " pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.982149 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90c20a1b-2941-4f3e-937d-8629dc663dd2-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-94jxh\" (UID: \"90c20a1b-2941-4f3e-937d-8629dc663dd2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-94jxh" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.982263 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/76967888-2735-467c-a288-a7bfe13f5690-node-pullsecrets\") pod \"apiserver-76f77b778f-jx6dn\" (UID: \"76967888-2735-467c-a288-a7bfe13f5690\") " pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.983264 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd1ef85b-03cb-4332-98e6-bcc6d38933dd-config\") pod \"authentication-operator-69f744f599-tkwlz\" (UID: \"dd1ef85b-03cb-4332-98e6-bcc6d38933dd\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-tkwlz" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.983342 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a9e71cc4-57de-4f68-9b1f-ddf9ce2deace-proxy-tls\") pod \"machine-config-operator-74547568cd-w6lsk\" (UID: \"a9e71cc4-57de-4f68-9b1f-ddf9ce2deace\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w6lsk" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.983372 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/45549dc9-0155-4d34-927c-25c5fb82872b-audit-policies\") pod \"apiserver-7bbb656c7d-s9stx\" (UID: \"45549dc9-0155-4d34-927c-25c5fb82872b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.983579 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8675f6e4-a233-45db-8916-68947da2554c-config\") pod \"route-controller-manager-6576b87f9c-cxv6j\" (UID: \"8675f6e4-a233-45db-8916-68947da2554c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cxv6j" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.983649 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76967888-2735-467c-a288-a7bfe13f5690-config\") pod \"apiserver-76f77b778f-jx6dn\" (UID: \"76967888-2735-467c-a288-a7bfe13f5690\") " pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.983678 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/27643829-9abc-4f6c-a6e9-5f0c86eb7594-serving-cert\") pod \"controller-manager-879f6c89f-s7vh9\" (UID: \"27643829-9abc-4f6c-a6e9-5f0c86eb7594\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s7vh9" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.983697 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd1ef85b-03cb-4332-98e6-bcc6d38933dd-service-ca-bundle\") pod \"authentication-operator-69f744f599-tkwlz\" (UID: \"dd1ef85b-03cb-4332-98e6-bcc6d38933dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tkwlz" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.983722 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a34869a5-5ade-43ba-874a-487b308a13ca-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fz22j\" (UID: \"a34869a5-5ade-43ba-874a-487b308a13ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-fz22j" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.983808 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52j57\" (UniqueName: \"kubernetes.io/projected/5a1a3d13-a380-46b2-afd6-c8f5dc864f39-kube-api-access-52j57\") pod \"downloads-7954f5f757-gqcsb\" (UID: \"5a1a3d13-a380-46b2-afd6-c8f5dc864f39\") " pod="openshift-console/downloads-7954f5f757-gqcsb" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.983859 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/76967888-2735-467c-a288-a7bfe13f5690-audit\") pod \"apiserver-76f77b778f-jx6dn\" (UID: \"76967888-2735-467c-a288-a7bfe13f5690\") " pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" Jan 21 15:49:20 crc kubenswrapper[4760]: 
I0121 15:49:20.983880 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd1ef85b-03cb-4332-98e6-bcc6d38933dd-serving-cert\") pod \"authentication-operator-69f744f599-tkwlz\" (UID: \"dd1ef85b-03cb-4332-98e6-bcc6d38933dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tkwlz" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.983899 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbccx\" (UniqueName: \"kubernetes.io/projected/8675f6e4-a233-45db-8916-68947da2554c-kube-api-access-lbccx\") pod \"route-controller-manager-6576b87f9c-cxv6j\" (UID: \"8675f6e4-a233-45db-8916-68947da2554c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cxv6j" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.983915 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.984158 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.984378 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.984529 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.984633 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76967888-2735-467c-a288-a7bfe13f5690-config\") pod \"apiserver-76f77b778f-jx6dn\" (UID: \"76967888-2735-467c-a288-a7bfe13f5690\") " 
pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.984768 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.984906 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.989185 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.989361 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-b6gcm"] Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.989474 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.990302 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/76967888-2735-467c-a288-a7bfe13f5690-image-import-ca\") pod \"apiserver-76f77b778f-jx6dn\" (UID: \"76967888-2735-467c-a288-a7bfe13f5690\") " pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.990656 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.991046 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.994457 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/76967888-2735-467c-a288-a7bfe13f5690-etcd-client\") pod 
\"apiserver-76f77b778f-jx6dn\" (UID: \"76967888-2735-467c-a288-a7bfe13f5690\") " pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.996483 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/76967888-2735-467c-a288-a7bfe13f5690-encryption-config\") pod \"apiserver-76f77b778f-jx6dn\" (UID: \"76967888-2735-467c-a288-a7bfe13f5690\") " pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.996934 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 21 15:49:20 crc kubenswrapper[4760]: I0121 15:49:20.997499 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-gfd2m"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.002546 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1096cfed-6553-45e5-927a-5169e506e758-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-fg6vb\" (UID: \"1096cfed-6553-45e5-927a-5169e506e758\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fg6vb" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.002620 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7drp8\" (UniqueName: \"kubernetes.io/projected/258aeabf-45e1-4b66-bec4-1c7f834e2b77-kube-api-access-7drp8\") pod \"console-operator-58897d9998-nq7q6\" (UID: \"258aeabf-45e1-4b66-bec4-1c7f834e2b77\") " pod="openshift-console-operator/console-operator-58897d9998-nq7q6" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.002649 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-service-ca\") pod \"console-f9d7485db-clnlg\" (UID: \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\") " pod="openshift-console/console-f9d7485db-clnlg" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.002705 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8675f6e4-a233-45db-8916-68947da2554c-client-ca\") pod \"route-controller-manager-6576b87f9c-cxv6j\" (UID: \"8675f6e4-a233-45db-8916-68947da2554c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cxv6j" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.002976 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/45549dc9-0155-4d34-927c-25c5fb82872b-etcd-client\") pod \"apiserver-7bbb656c7d-s9stx\" (UID: \"45549dc9-0155-4d34-927c-25c5fb82872b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.016022 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-74x59"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.016080 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-b6gcm" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.016272 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-gfd2m" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.016753 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-p467d" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.016923 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.017701 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76967888-2735-467c-a288-a7bfe13f5690-serving-cert\") pod \"apiserver-76f77b778f-jx6dn\" (UID: \"76967888-2735-467c-a288-a7bfe13f5690\") " pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.017796 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.017855 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.017937 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.018138 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.018800 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.022512 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.022591 4760 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-image-registry/image-registry-697d97f7c8-l6q9j"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.023177 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-x7qrd"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.023265 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-74x59" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.023380 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.024102 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.024596 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/76967888-2735-467c-a288-a7bfe13f5690-audit\") pod \"apiserver-76f77b778f-jx6dn\" (UID: \"76967888-2735-467c-a288-a7bfe13f5690\") " pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.024612 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.024862 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.024924 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8rjck"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.024986 4760 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.025288 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8rjck" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.025607 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-x7qrd" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.026149 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.027662 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.027802 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dm455"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.030201 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dm455" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.032869 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-4x9fq"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.035358 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-4x9fq" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.037131 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.037275 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mcpg9"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.037403 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.037933 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mcpg9" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.038236 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-n6cjk"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.038745 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-n6cjk" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.040069 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kvxcc"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.042243 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kvxcc" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.046629 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gnjlk"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.047291 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gnjlk" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.047765 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5qwrq"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.048183 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5qwrq" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.048765 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-rw8t2"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.049678 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rw8t2" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.049914 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-jx6dn"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.050768 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-5p8jw"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.051705 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483505-mt5nl"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.052154 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-mt5nl" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.052511 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-5p8jw" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.052747 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-cxv6j"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.054242 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-clnlg"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.054815 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fg6vb"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.055742 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-s7vh9"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.056887 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.058167 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qdfkz"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.058953 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-sztm4"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.060012 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qmwgj"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.060163 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-sztm4" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.061210 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-z4gv8"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.065010 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-nq7q6"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.067259 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-tkwlz"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.068534 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqsn5"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.071477 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-gqcsb"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.072472 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.074487 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f2k2z"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.075702 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.079704 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fz22j"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.081175 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-x7qrd"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 
15:49:21.082877 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-l6q9j"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.084356 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-94jxh"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.084393 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5qwrq"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.085761 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mcpg9"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.087020 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-p467d"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.088221 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-xq6c8"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.089064 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-xq6c8" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.089275 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-m4t9n"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.090303 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-m4t9n" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.090780 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hcqpm"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.092142 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-zg9b5"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.093209 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-w6lsk"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.094495 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-rw8t2"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.095438 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.096044 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-74x59"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.097122 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-8jb7f"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.098506 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dm455"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.099763 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-n6cjk"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.100881 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8rjck"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.102015 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gnjlk"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.103560 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kvxcc"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.104792 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-b6gcm"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.105867 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483505-mt5nl"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.107080 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-sztm4"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.108856 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-xq6c8"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.110070 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-5p8jw"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.111223 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-m4t9n"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.112259 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-4x9fq"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.113169 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-r8q6j"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 
15:49:21.114000 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-r8q6j" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.115866 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.118828 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7eb59e83-0fc4-4e75-9ad8-7c5c10b40122-proxy-tls\") pod \"machine-config-controller-84d6567774-b6gcm\" (UID: \"7eb59e83-0fc4-4e75-9ad8-7c5c10b40122\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-b6gcm" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.118857 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/41967b98-5ae8-45a6-8ec2-1be35218fa5f-etcd-ca\") pod \"etcd-operator-b45778765-74x59\" (UID: \"41967b98-5ae8-45a6-8ec2-1be35218fa5f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-74x59" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.118900 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e527c232-4f49-4920-a0cc-403df50c3f9c-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-qdfkz\" (UID: \"e527c232-4f49-4920-a0cc-403df50c3f9c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qdfkz" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.118929 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-oauth-serving-cert\") pod \"console-f9d7485db-clnlg\" (UID: \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\") " 
pod="openshift-console/console-f9d7485db-clnlg" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.118952 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgzbf\" (UniqueName: \"kubernetes.io/projected/a34869a5-5ade-43ba-874a-487b308a13ca-kube-api-access-jgzbf\") pod \"marketplace-operator-79b997595-fz22j\" (UID: \"a34869a5-5ade-43ba-874a-487b308a13ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-fz22j" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.118972 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfs6b\" (UniqueName: \"kubernetes.io/projected/7eb59e83-0fc4-4e75-9ad8-7c5c10b40122-kube-api-access-lfs6b\") pod \"machine-config-controller-84d6567774-b6gcm\" (UID: \"7eb59e83-0fc4-4e75-9ad8-7c5c10b40122\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-b6gcm" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.118998 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90c20a1b-2941-4f3e-937d-8629dc663dd2-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-94jxh\" (UID: \"90c20a1b-2941-4f3e-937d-8629dc663dd2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-94jxh" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.119022 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c6d4e7cb-581f-4404-b64f-03fb526edeaf-machine-approver-tls\") pod \"machine-approver-56656f9798-vqm67\" (UID: \"c6d4e7cb-581f-4404-b64f-03fb526edeaf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vqm67" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.119046 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/dd1ef85b-03cb-4332-98e6-bcc6d38933dd-config\") pod \"authentication-operator-69f744f599-tkwlz\" (UID: \"dd1ef85b-03cb-4332-98e6-bcc6d38933dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tkwlz" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.119076 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a9e71cc4-57de-4f68-9b1f-ddf9ce2deace-proxy-tls\") pod \"machine-config-operator-74547568cd-w6lsk\" (UID: \"a9e71cc4-57de-4f68-9b1f-ddf9ce2deace\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w6lsk" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.119095 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/45549dc9-0155-4d34-927c-25c5fb82872b-audit-policies\") pod \"apiserver-7bbb656c7d-s9stx\" (UID: \"45549dc9-0155-4d34-927c-25c5fb82872b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.119115 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8675f6e4-a233-45db-8916-68947da2554c-config\") pod \"route-controller-manager-6576b87f9c-cxv6j\" (UID: \"8675f6e4-a233-45db-8916-68947da2554c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cxv6j" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.119139 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27643829-9abc-4f6c-a6e9-5f0c86eb7594-serving-cert\") pod \"controller-manager-879f6c89f-s7vh9\" (UID: \"27643829-9abc-4f6c-a6e9-5f0c86eb7594\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s7vh9" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 
15:49:21.119160 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd1ef85b-03cb-4332-98e6-bcc6d38933dd-service-ca-bundle\") pod \"authentication-operator-69f744f599-tkwlz\" (UID: \"dd1ef85b-03cb-4332-98e6-bcc6d38933dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tkwlz" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.119215 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a34869a5-5ade-43ba-874a-487b308a13ca-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fz22j\" (UID: \"a34869a5-5ade-43ba-874a-487b308a13ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-fz22j" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.119243 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3671d10c-81c6-4c7f-9117-1c237e4efe51-images\") pod \"machine-api-operator-5694c8668f-4x9fq\" (UID: \"3671d10c-81c6-4c7f-9117-1c237e4efe51\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4x9fq" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.119277 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd1ef85b-03cb-4332-98e6-bcc6d38933dd-serving-cert\") pod \"authentication-operator-69f744f599-tkwlz\" (UID: \"dd1ef85b-03cb-4332-98e6-bcc6d38933dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tkwlz" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.119301 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbccx\" (UniqueName: \"kubernetes.io/projected/8675f6e4-a233-45db-8916-68947da2554c-kube-api-access-lbccx\") pod 
\"route-controller-manager-6576b87f9c-cxv6j\" (UID: \"8675f6e4-a233-45db-8916-68947da2554c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cxv6j" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.119341 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52j57\" (UniqueName: \"kubernetes.io/projected/5a1a3d13-a380-46b2-afd6-c8f5dc864f39-kube-api-access-52j57\") pod \"downloads-7954f5f757-gqcsb\" (UID: \"5a1a3d13-a380-46b2-afd6-c8f5dc864f39\") " pod="openshift-console/downloads-7954f5f757-gqcsb" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.119364 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41967b98-5ae8-45a6-8ec2-1be35218fa5f-serving-cert\") pod \"etcd-operator-b45778765-74x59\" (UID: \"41967b98-5ae8-45a6-8ec2-1be35218fa5f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-74x59" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.119395 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1096cfed-6553-45e5-927a-5169e506e758-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-fg6vb\" (UID: \"1096cfed-6553-45e5-927a-5169e506e758\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fg6vb" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.119414 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7drp8\" (UniqueName: \"kubernetes.io/projected/258aeabf-45e1-4b66-bec4-1c7f834e2b77-kube-api-access-7drp8\") pod \"console-operator-58897d9998-nq7q6\" (UID: \"258aeabf-45e1-4b66-bec4-1c7f834e2b77\") " pod="openshift-console-operator/console-operator-58897d9998-nq7q6" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.119430 4760 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-service-ca\") pod \"console-f9d7485db-clnlg\" (UID: \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\") " pod="openshift-console/console-f9d7485db-clnlg" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.119450 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8675f6e4-a233-45db-8916-68947da2554c-client-ca\") pod \"route-controller-manager-6576b87f9c-cxv6j\" (UID: \"8675f6e4-a233-45db-8916-68947da2554c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cxv6j" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.119467 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/45549dc9-0155-4d34-927c-25c5fb82872b-etcd-client\") pod \"apiserver-7bbb656c7d-s9stx\" (UID: \"45549dc9-0155-4d34-927c-25c5fb82872b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.119486 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vc9ph\" (UniqueName: \"kubernetes.io/projected/1096cfed-6553-45e5-927a-5169e506e758-kube-api-access-vc9ph\") pod \"openshift-apiserver-operator-796bbdcf4f-fg6vb\" (UID: \"1096cfed-6553-45e5-927a-5169e506e758\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fg6vb" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.119502 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/258aeabf-45e1-4b66-bec4-1c7f834e2b77-trusted-ca\") pod \"console-operator-58897d9998-nq7q6\" (UID: \"258aeabf-45e1-4b66-bec4-1c7f834e2b77\") " pod="openshift-console-operator/console-operator-58897d9998-nq7q6" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 
15:49:21.119517 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90c20a1b-2941-4f3e-937d-8629dc663dd2-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-94jxh\" (UID: \"90c20a1b-2941-4f3e-937d-8629dc663dd2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-94jxh" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.119534 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-console-config\") pod \"console-f9d7485db-clnlg\" (UID: \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\") " pod="openshift-console/console-f9d7485db-clnlg" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.119554 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d41f70b-9e7e-4e99-8fad-ad4a5a646df1-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sqsn5\" (UID: \"5d41f70b-9e7e-4e99-8fad-ad4a5a646df1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqsn5" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.119572 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/41967b98-5ae8-45a6-8ec2-1be35218fa5f-etcd-service-ca\") pod \"etcd-operator-b45778765-74x59\" (UID: \"41967b98-5ae8-45a6-8ec2-1be35218fa5f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-74x59" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.119594 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8675f6e4-a233-45db-8916-68947da2554c-serving-cert\") pod \"route-controller-manager-6576b87f9c-cxv6j\" (UID: 
\"8675f6e4-a233-45db-8916-68947da2554c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cxv6j" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.119614 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wm8h\" (UniqueName: \"kubernetes.io/projected/3671d10c-81c6-4c7f-9117-1c237e4efe51-kube-api-access-4wm8h\") pod \"machine-api-operator-5694c8668f-4x9fq\" (UID: \"3671d10c-81c6-4c7f-9117-1c237e4efe51\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4x9fq" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.119636 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1096cfed-6553-45e5-927a-5169e506e758-config\") pod \"openshift-apiserver-operator-796bbdcf4f-fg6vb\" (UID: \"1096cfed-6553-45e5-927a-5169e506e758\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fg6vb" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.119656 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-console-oauth-config\") pod \"console-f9d7485db-clnlg\" (UID: \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\") " pod="openshift-console/console-f9d7485db-clnlg" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.119675 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd1ef85b-03cb-4332-98e6-bcc6d38933dd-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-tkwlz\" (UID: \"dd1ef85b-03cb-4332-98e6-bcc6d38933dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tkwlz" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.119695 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-trusted-ca-bundle\") pod \"console-f9d7485db-clnlg\" (UID: \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\") " pod="openshift-console/console-f9d7485db-clnlg" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.119714 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5d41f70b-9e7e-4e99-8fad-ad4a5a646df1-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sqsn5\" (UID: \"5d41f70b-9e7e-4e99-8fad-ad4a5a646df1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqsn5" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.119743 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1affbee-c661-46d6-89cd-08977e347d3c-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-hcqpm\" (UID: \"c1affbee-c661-46d6-89cd-08977e347d3c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hcqpm" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.119758 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-oauth-serving-cert\") pod \"console-f9d7485db-clnlg\" (UID: \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\") " pod="openshift-console/console-f9d7485db-clnlg" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.119938 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/45549dc9-0155-4d34-927c-25c5fb82872b-audit-policies\") pod \"apiserver-7bbb656c7d-s9stx\" (UID: \"45549dc9-0155-4d34-927c-25c5fb82872b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 
15:49:21.120144 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45549dc9-0155-4d34-927c-25c5fb82872b-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-s9stx\" (UID: \"45549dc9-0155-4d34-927c-25c5fb82872b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.119768 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45549dc9-0155-4d34-927c-25c5fb82872b-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-s9stx\" (UID: \"45549dc9-0155-4d34-927c-25c5fb82872b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.120222 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggnkt\" (UniqueName: \"kubernetes.io/projected/dd1ef85b-03cb-4332-98e6-bcc6d38933dd-kube-api-access-ggnkt\") pod \"authentication-operator-69f744f599-tkwlz\" (UID: \"dd1ef85b-03cb-4332-98e6-bcc6d38933dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tkwlz" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.120255 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/41967b98-5ae8-45a6-8ec2-1be35218fa5f-etcd-client\") pod \"etcd-operator-b45778765-74x59\" (UID: \"41967b98-5ae8-45a6-8ec2-1be35218fa5f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-74x59" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.120291 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l99t9\" (UniqueName: \"kubernetes.io/projected/e527c232-4f49-4920-a0cc-403df50c3f9c-kube-api-access-l99t9\") pod \"cluster-samples-operator-665b6dd947-qdfkz\" (UID: \"e527c232-4f49-4920-a0cc-403df50c3f9c\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qdfkz" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.120316 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/258aeabf-45e1-4b66-bec4-1c7f834e2b77-serving-cert\") pod \"console-operator-58897d9998-nq7q6\" (UID: \"258aeabf-45e1-4b66-bec4-1c7f834e2b77\") " pod="openshift-console-operator/console-operator-58897d9998-nq7q6" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.120375 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a9e71cc4-57de-4f68-9b1f-ddf9ce2deace-auth-proxy-config\") pod \"machine-config-operator-74547568cd-w6lsk\" (UID: \"a9e71cc4-57de-4f68-9b1f-ddf9ce2deace\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w6lsk" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.120396 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9c28m\" (UniqueName: \"kubernetes.io/projected/c6d4e7cb-581f-4404-b64f-03fb526edeaf-kube-api-access-9c28m\") pod \"machine-approver-56656f9798-vqm67\" (UID: \"c6d4e7cb-581f-4404-b64f-03fb526edeaf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vqm67" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.120420 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27643829-9abc-4f6c-a6e9-5f0c86eb7594-config\") pod \"controller-manager-879f6c89f-s7vh9\" (UID: \"27643829-9abc-4f6c-a6e9-5f0c86eb7594\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s7vh9" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.120436 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/27643829-9abc-4f6c-a6e9-5f0c86eb7594-client-ca\") pod \"controller-manager-879f6c89f-s7vh9\" (UID: \"27643829-9abc-4f6c-a6e9-5f0c86eb7594\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s7vh9" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.120452 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k25rd\" (UniqueName: \"kubernetes.io/projected/27643829-9abc-4f6c-a6e9-5f0c86eb7594-kube-api-access-k25rd\") pod \"controller-manager-879f6c89f-s7vh9\" (UID: \"27643829-9abc-4f6c-a6e9-5f0c86eb7594\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s7vh9" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.120467 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-service-ca\") pod \"console-f9d7485db-clnlg\" (UID: \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\") " pod="openshift-console/console-f9d7485db-clnlg" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.120567 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd1ef85b-03cb-4332-98e6-bcc6d38933dd-config\") pod \"authentication-operator-69f744f599-tkwlz\" (UID: \"dd1ef85b-03cb-4332-98e6-bcc6d38933dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tkwlz" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.121571 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8675f6e4-a233-45db-8916-68947da2554c-config\") pod \"route-controller-manager-6576b87f9c-cxv6j\" (UID: \"8675f6e4-a233-45db-8916-68947da2554c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cxv6j" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.121603 4760 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/27643829-9abc-4f6c-a6e9-5f0c86eb7594-client-ca\") pod \"controller-manager-879f6c89f-s7vh9\" (UID: \"27643829-9abc-4f6c-a6e9-5f0c86eb7594\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s7vh9" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.122213 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a9e71cc4-57de-4f68-9b1f-ddf9ce2deace-auth-proxy-config\") pod \"machine-config-operator-74547568cd-w6lsk\" (UID: \"a9e71cc4-57de-4f68-9b1f-ddf9ce2deace\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w6lsk" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.122985 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c6d4e7cb-581f-4404-b64f-03fb526edeaf-machine-approver-tls\") pod \"machine-approver-56656f9798-vqm67\" (UID: \"c6d4e7cb-581f-4404-b64f-03fb526edeaf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vqm67" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.120471 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a34869a5-5ade-43ba-874a-487b308a13ca-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fz22j\" (UID: \"a34869a5-5ade-43ba-874a-487b308a13ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-fz22j" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.123079 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d41f70b-9e7e-4e99-8fad-ad4a5a646df1-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sqsn5\" (UID: \"5d41f70b-9e7e-4e99-8fad-ad4a5a646df1\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqsn5" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.123128 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc9xl\" (UniqueName: \"kubernetes.io/projected/90c20a1b-2941-4f3e-937d-8629dc663dd2-kube-api-access-nc9xl\") pod \"openshift-controller-manager-operator-756b6f6bc6-94jxh\" (UID: \"90c20a1b-2941-4f3e-937d-8629dc663dd2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-94jxh" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.123156 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8675f6e4-a233-45db-8916-68947da2554c-client-ca\") pod \"route-controller-manager-6576b87f9c-cxv6j\" (UID: \"8675f6e4-a233-45db-8916-68947da2554c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cxv6j" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.123172 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/3671d10c-81c6-4c7f-9117-1c237e4efe51-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-4x9fq\" (UID: \"3671d10c-81c6-4c7f-9117-1c237e4efe51\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4x9fq" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.123217 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1affbee-c661-46d6-89cd-08977e347d3c-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-hcqpm\" (UID: \"c1affbee-c661-46d6-89cd-08977e347d3c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hcqpm" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.123247 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6tkf\" (UniqueName: \"kubernetes.io/projected/41967b98-5ae8-45a6-8ec2-1be35218fa5f-kube-api-access-h6tkf\") pod \"etcd-operator-b45778765-74x59\" (UID: \"41967b98-5ae8-45a6-8ec2-1be35218fa5f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-74x59" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.123289 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a9e71cc4-57de-4f68-9b1f-ddf9ce2deace-images\") pod \"machine-config-operator-74547568cd-w6lsk\" (UID: \"a9e71cc4-57de-4f68-9b1f-ddf9ce2deace\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w6lsk" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.123315 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/07d98bec-099c-43a6-aa43-a96450505b5b-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-qmwgj\" (UID: \"07d98bec-099c-43a6-aa43-a96450505b5b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qmwgj" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.123457 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/45549dc9-0155-4d34-927c-25c5fb82872b-audit-dir\") pod \"apiserver-7bbb656c7d-s9stx\" (UID: \"45549dc9-0155-4d34-927c-25c5fb82872b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.123492 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/27643829-9abc-4f6c-a6e9-5f0c86eb7594-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-s7vh9\" (UID: \"27643829-9abc-4f6c-a6e9-5f0c86eb7594\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-s7vh9" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.123518 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/258aeabf-45e1-4b66-bec4-1c7f834e2b77-config\") pod \"console-operator-58897d9998-nq7q6\" (UID: \"258aeabf-45e1-4b66-bec4-1c7f834e2b77\") " pod="openshift-console-operator/console-operator-58897d9998-nq7q6" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.123543 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-console-serving-cert\") pod \"console-f9d7485db-clnlg\" (UID: \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\") " pod="openshift-console/console-f9d7485db-clnlg" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.123576 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bfvn\" (UniqueName: \"kubernetes.io/projected/07d98bec-099c-43a6-aa43-a96450505b5b-kube-api-access-2bfvn\") pod \"cluster-image-registry-operator-dc59b4c8b-qmwgj\" (UID: \"07d98bec-099c-43a6-aa43-a96450505b5b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qmwgj" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.123601 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/07d98bec-099c-43a6-aa43-a96450505b5b-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-qmwgj\" (UID: \"07d98bec-099c-43a6-aa43-a96450505b5b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qmwgj" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.123620 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c1affbee-c661-46d6-89cd-08977e347d3c-config\") pod \"kube-controller-manager-operator-78b949d7b-hcqpm\" (UID: \"c1affbee-c661-46d6-89cd-08977e347d3c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hcqpm" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.123643 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3671d10c-81c6-4c7f-9117-1c237e4efe51-config\") pod \"machine-api-operator-5694c8668f-4x9fq\" (UID: \"3671d10c-81c6-4c7f-9117-1c237e4efe51\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4x9fq" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.123667 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c6d4e7cb-581f-4404-b64f-03fb526edeaf-auth-proxy-config\") pod \"machine-approver-56656f9798-vqm67\" (UID: \"c6d4e7cb-581f-4404-b64f-03fb526edeaf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vqm67" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.123684 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6d4e7cb-581f-4404-b64f-03fb526edeaf-config\") pod \"machine-approver-56656f9798-vqm67\" (UID: \"c6d4e7cb-581f-4404-b64f-03fb526edeaf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vqm67" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.123866 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90c20a1b-2941-4f3e-937d-8629dc663dd2-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-94jxh\" (UID: \"90c20a1b-2941-4f3e-937d-8629dc663dd2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-94jxh" Jan 21 
15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.123990 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd1ef85b-03cb-4332-98e6-bcc6d38933dd-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-tkwlz\" (UID: \"dd1ef85b-03cb-4332-98e6-bcc6d38933dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tkwlz" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.124411 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d41f70b-9e7e-4e99-8fad-ad4a5a646df1-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sqsn5\" (UID: \"5d41f70b-9e7e-4e99-8fad-ad4a5a646df1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqsn5" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.124509 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1096cfed-6553-45e5-927a-5169e506e758-config\") pod \"openshift-apiserver-operator-796bbdcf4f-fg6vb\" (UID: \"1096cfed-6553-45e5-927a-5169e506e758\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fg6vb" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.124932 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90c20a1b-2941-4f3e-937d-8629dc663dd2-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-94jxh\" (UID: \"90c20a1b-2941-4f3e-937d-8629dc663dd2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-94jxh" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.124931 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27643829-9abc-4f6c-a6e9-5f0c86eb7594-serving-cert\") pod 
\"controller-manager-879f6c89f-s7vh9\" (UID: \"27643829-9abc-4f6c-a6e9-5f0c86eb7594\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s7vh9" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.125202 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a9e71cc4-57de-4f68-9b1f-ddf9ce2deace-proxy-tls\") pod \"machine-config-operator-74547568cd-w6lsk\" (UID: \"a9e71cc4-57de-4f68-9b1f-ddf9ce2deace\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w6lsk" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.125402 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8675f6e4-a233-45db-8916-68947da2554c-serving-cert\") pod \"route-controller-manager-6576b87f9c-cxv6j\" (UID: \"8675f6e4-a233-45db-8916-68947da2554c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cxv6j" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.125516 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd1ef85b-03cb-4332-98e6-bcc6d38933dd-service-ca-bundle\") pod \"authentication-operator-69f744f599-tkwlz\" (UID: \"dd1ef85b-03cb-4332-98e6-bcc6d38933dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tkwlz" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.126195 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a34869a5-5ade-43ba-874a-487b308a13ca-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fz22j\" (UID: \"a34869a5-5ade-43ba-874a-487b308a13ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-fz22j" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.126223 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"images\" (UniqueName: \"kubernetes.io/configmap/a9e71cc4-57de-4f68-9b1f-ddf9ce2deace-images\") pod \"machine-config-operator-74547568cd-w6lsk\" (UID: \"a9e71cc4-57de-4f68-9b1f-ddf9ce2deace\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w6lsk" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.126270 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/258aeabf-45e1-4b66-bec4-1c7f834e2b77-trusted-ca\") pod \"console-operator-58897d9998-nq7q6\" (UID: \"258aeabf-45e1-4b66-bec4-1c7f834e2b77\") " pod="openshift-console-operator/console-operator-58897d9998-nq7q6" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.126275 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/45549dc9-0155-4d34-927c-25c5fb82872b-audit-dir\") pod \"apiserver-7bbb656c7d-s9stx\" (UID: \"45549dc9-0155-4d34-927c-25c5fb82872b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.126908 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1affbee-c661-46d6-89cd-08977e347d3c-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-hcqpm\" (UID: \"c1affbee-c661-46d6-89cd-08977e347d3c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hcqpm" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.127287 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/27643829-9abc-4f6c-a6e9-5f0c86eb7594-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-s7vh9\" (UID: \"27643829-9abc-4f6c-a6e9-5f0c86eb7594\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s7vh9" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.127362 4760 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27643829-9abc-4f6c-a6e9-5f0c86eb7594-config\") pod \"controller-manager-879f6c89f-s7vh9\" (UID: \"27643829-9abc-4f6c-a6e9-5f0c86eb7594\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s7vh9" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.127474 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c6d4e7cb-581f-4404-b64f-03fb526edeaf-auth-proxy-config\") pod \"machine-approver-56656f9798-vqm67\" (UID: \"c6d4e7cb-581f-4404-b64f-03fb526edeaf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vqm67" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.127529 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-trusted-ca-bundle\") pod \"console-f9d7485db-clnlg\" (UID: \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\") " pod="openshift-console/console-f9d7485db-clnlg" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.127914 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6d4e7cb-581f-4404-b64f-03fb526edeaf-config\") pod \"machine-approver-56656f9798-vqm67\" (UID: \"c6d4e7cb-581f-4404-b64f-03fb526edeaf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vqm67" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.127927 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-console-oauth-config\") pod \"console-f9d7485db-clnlg\" (UID: \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\") " pod="openshift-console/console-f9d7485db-clnlg" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.128044 4760 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/258aeabf-45e1-4b66-bec4-1c7f834e2b77-config\") pod \"console-operator-58897d9998-nq7q6\" (UID: \"258aeabf-45e1-4b66-bec4-1c7f834e2b77\") " pod="openshift-console-operator/console-operator-58897d9998-nq7q6" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.128127 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a34869a5-5ade-43ba-874a-487b308a13ca-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fz22j\" (UID: \"a34869a5-5ade-43ba-874a-487b308a13ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-fz22j" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.128197 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/816d3ef0-0471-4ee0-998b-947d78f8d3f3-serving-cert\") pod \"openshift-config-operator-7777fb866f-z4gv8\" (UID: \"816d3ef0-0471-4ee0-998b-947d78f8d3f3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-z4gv8" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.128479 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1affbee-c661-46d6-89cd-08977e347d3c-config\") pod \"kube-controller-manager-operator-78b949d7b-hcqpm\" (UID: \"c1affbee-c661-46d6-89cd-08977e347d3c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hcqpm" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.128542 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/07d98bec-099c-43a6-aa43-a96450505b5b-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-qmwgj\" (UID: \"07d98bec-099c-43a6-aa43-a96450505b5b\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qmwgj" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.128655 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-console-config\") pod \"console-f9d7485db-clnlg\" (UID: \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\") " pod="openshift-console/console-f9d7485db-clnlg" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.128772 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ks7b9\" (UniqueName: \"kubernetes.io/projected/a9e71cc4-57de-4f68-9b1f-ddf9ce2deace-kube-api-access-ks7b9\") pod \"machine-config-operator-74547568cd-w6lsk\" (UID: \"a9e71cc4-57de-4f68-9b1f-ddf9ce2deace\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w6lsk" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.128827 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/45549dc9-0155-4d34-927c-25c5fb82872b-encryption-config\") pod \"apiserver-7bbb656c7d-s9stx\" (UID: \"45549dc9-0155-4d34-927c-25c5fb82872b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.129378 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/07d98bec-099c-43a6-aa43-a96450505b5b-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-qmwgj\" (UID: \"07d98bec-099c-43a6-aa43-a96450505b5b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qmwgj" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.129500 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/816d3ef0-0471-4ee0-998b-947d78f8d3f3-available-featuregates\") pod \"openshift-config-operator-7777fb866f-z4gv8\" (UID: \"816d3ef0-0471-4ee0-998b-947d78f8d3f3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-z4gv8" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.129542 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvl8t\" (UniqueName: \"kubernetes.io/projected/45549dc9-0155-4d34-927c-25c5fb82872b-kube-api-access-qvl8t\") pod \"apiserver-7bbb656c7d-s9stx\" (UID: \"45549dc9-0155-4d34-927c-25c5fb82872b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.129606 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/07d98bec-099c-43a6-aa43-a96450505b5b-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-qmwgj\" (UID: \"07d98bec-099c-43a6-aa43-a96450505b5b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qmwgj" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.129782 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgh59\" (UniqueName: \"kubernetes.io/projected/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-kube-api-access-zgh59\") pod \"console-f9d7485db-clnlg\" (UID: \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\") " pod="openshift-console/console-f9d7485db-clnlg" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.129814 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/816d3ef0-0471-4ee0-998b-947d78f8d3f3-available-featuregates\") pod \"openshift-config-operator-7777fb866f-z4gv8\" (UID: \"816d3ef0-0471-4ee0-998b-947d78f8d3f3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-z4gv8" Jan 21 15:49:21 crc 
kubenswrapper[4760]: I0121 15:49:21.129823 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qssz\" (UniqueName: \"kubernetes.io/projected/816d3ef0-0471-4ee0-998b-947d78f8d3f3-kube-api-access-8qssz\") pod \"openshift-config-operator-7777fb866f-z4gv8\" (UID: \"816d3ef0-0471-4ee0-998b-947d78f8d3f3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-z4gv8" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.129849 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/45549dc9-0155-4d34-927c-25c5fb82872b-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-s9stx\" (UID: \"45549dc9-0155-4d34-927c-25c5fb82872b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.129849 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1096cfed-6553-45e5-927a-5169e506e758-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-fg6vb\" (UID: \"1096cfed-6553-45e5-927a-5169e506e758\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fg6vb" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.129955 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/45549dc9-0155-4d34-927c-25c5fb82872b-etcd-client\") pod \"apiserver-7bbb656c7d-s9stx\" (UID: \"45549dc9-0155-4d34-927c-25c5fb82872b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.130027 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45549dc9-0155-4d34-927c-25c5fb82872b-serving-cert\") pod \"apiserver-7bbb656c7d-s9stx\" (UID: \"45549dc9-0155-4d34-927c-25c5fb82872b\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.130100 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7eb59e83-0fc4-4e75-9ad8-7c5c10b40122-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-b6gcm\" (UID: \"7eb59e83-0fc4-4e75-9ad8-7c5c10b40122\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-b6gcm" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.130218 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d41f70b-9e7e-4e99-8fad-ad4a5a646df1-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sqsn5\" (UID: \"5d41f70b-9e7e-4e99-8fad-ad4a5a646df1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqsn5" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.130246 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41967b98-5ae8-45a6-8ec2-1be35218fa5f-config\") pod \"etcd-operator-b45778765-74x59\" (UID: \"41967b98-5ae8-45a6-8ec2-1be35218fa5f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-74x59" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.130306 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/45549dc9-0155-4d34-927c-25c5fb82872b-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-s9stx\" (UID: \"45549dc9-0155-4d34-927c-25c5fb82872b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.130998 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/816d3ef0-0471-4ee0-998b-947d78f8d3f3-serving-cert\") pod \"openshift-config-operator-7777fb866f-z4gv8\" (UID: \"816d3ef0-0471-4ee0-998b-947d78f8d3f3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-z4gv8" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.131411 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/45549dc9-0155-4d34-927c-25c5fb82872b-encryption-config\") pod \"apiserver-7bbb656c7d-s9stx\" (UID: \"45549dc9-0155-4d34-927c-25c5fb82872b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.132032 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/258aeabf-45e1-4b66-bec4-1c7f834e2b77-serving-cert\") pod \"console-operator-58897d9998-nq7q6\" (UID: \"258aeabf-45e1-4b66-bec4-1c7f834e2b77\") " pod="openshift-console-operator/console-operator-58897d9998-nq7q6" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.132405 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45549dc9-0155-4d34-927c-25c5fb82872b-serving-cert\") pod \"apiserver-7bbb656c7d-s9stx\" (UID: \"45549dc9-0155-4d34-927c-25c5fb82872b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.132946 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-console-serving-cert\") pod \"console-f9d7485db-clnlg\" (UID: \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\") " pod="openshift-console/console-f9d7485db-clnlg" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.133271 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/e527c232-4f49-4920-a0cc-403df50c3f9c-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-qdfkz\" (UID: \"e527c232-4f49-4920-a0cc-403df50c3f9c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qdfkz" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.137777 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.138013 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd1ef85b-03cb-4332-98e6-bcc6d38933dd-serving-cert\") pod \"authentication-operator-69f744f599-tkwlz\" (UID: \"dd1ef85b-03cb-4332-98e6-bcc6d38933dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tkwlz" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.172219 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckkpr\" (UniqueName: \"kubernetes.io/projected/76967888-2735-467c-a288-a7bfe13f5690-kube-api-access-ckkpr\") pod \"apiserver-76f77b778f-jx6dn\" (UID: \"76967888-2735-467c-a288-a7bfe13f5690\") " pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.196039 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.215808 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.231186 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfs6b\" (UniqueName: \"kubernetes.io/projected/7eb59e83-0fc4-4e75-9ad8-7c5c10b40122-kube-api-access-lfs6b\") pod \"machine-config-controller-84d6567774-b6gcm\" (UID: 
\"7eb59e83-0fc4-4e75-9ad8-7c5c10b40122\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-b6gcm" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.231235 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3671d10c-81c6-4c7f-9117-1c237e4efe51-images\") pod \"machine-api-operator-5694c8668f-4x9fq\" (UID: \"3671d10c-81c6-4c7f-9117-1c237e4efe51\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4x9fq" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.231272 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41967b98-5ae8-45a6-8ec2-1be35218fa5f-serving-cert\") pod \"etcd-operator-b45778765-74x59\" (UID: \"41967b98-5ae8-45a6-8ec2-1be35218fa5f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-74x59" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.231307 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/41967b98-5ae8-45a6-8ec2-1be35218fa5f-etcd-service-ca\") pod \"etcd-operator-b45778765-74x59\" (UID: \"41967b98-5ae8-45a6-8ec2-1be35218fa5f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-74x59" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.231341 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wm8h\" (UniqueName: \"kubernetes.io/projected/3671d10c-81c6-4c7f-9117-1c237e4efe51-kube-api-access-4wm8h\") pod \"machine-api-operator-5694c8668f-4x9fq\" (UID: \"3671d10c-81c6-4c7f-9117-1c237e4efe51\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4x9fq" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.231382 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/41967b98-5ae8-45a6-8ec2-1be35218fa5f-etcd-client\") pod \"etcd-operator-b45778765-74x59\" (UID: \"41967b98-5ae8-45a6-8ec2-1be35218fa5f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-74x59" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.231429 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/3671d10c-81c6-4c7f-9117-1c237e4efe51-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-4x9fq\" (UID: \"3671d10c-81c6-4c7f-9117-1c237e4efe51\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4x9fq" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.231446 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6tkf\" (UniqueName: \"kubernetes.io/projected/41967b98-5ae8-45a6-8ec2-1be35218fa5f-kube-api-access-h6tkf\") pod \"etcd-operator-b45778765-74x59\" (UID: \"41967b98-5ae8-45a6-8ec2-1be35218fa5f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-74x59" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.231482 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3671d10c-81c6-4c7f-9117-1c237e4efe51-config\") pod \"machine-api-operator-5694c8668f-4x9fq\" (UID: \"3671d10c-81c6-4c7f-9117-1c237e4efe51\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4x9fq" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.231553 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7eb59e83-0fc4-4e75-9ad8-7c5c10b40122-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-b6gcm\" (UID: \"7eb59e83-0fc4-4e75-9ad8-7c5c10b40122\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-b6gcm" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.231577 
4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41967b98-5ae8-45a6-8ec2-1be35218fa5f-config\") pod \"etcd-operator-b45778765-74x59\" (UID: \"41967b98-5ae8-45a6-8ec2-1be35218fa5f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-74x59" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.231601 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7eb59e83-0fc4-4e75-9ad8-7c5c10b40122-proxy-tls\") pod \"machine-config-controller-84d6567774-b6gcm\" (UID: \"7eb59e83-0fc4-4e75-9ad8-7c5c10b40122\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-b6gcm" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.231621 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/41967b98-5ae8-45a6-8ec2-1be35218fa5f-etcd-ca\") pod \"etcd-operator-b45778765-74x59\" (UID: \"41967b98-5ae8-45a6-8ec2-1be35218fa5f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-74x59" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.232288 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7eb59e83-0fc4-4e75-9ad8-7c5c10b40122-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-b6gcm\" (UID: \"7eb59e83-0fc4-4e75-9ad8-7c5c10b40122\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-b6gcm" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.234580 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7eb59e83-0fc4-4e75-9ad8-7c5c10b40122-proxy-tls\") pod \"machine-config-controller-84d6567774-b6gcm\" (UID: \"7eb59e83-0fc4-4e75-9ad8-7c5c10b40122\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-b6gcm" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.236152 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.255292 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.275562 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.295740 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.316196 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.336167 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.355953 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.376396 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.396068 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.415232 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.436016 4760 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.455962 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.464851 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.480580 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.497137 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.516955 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.535867 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.556503 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.587737 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.597259 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.615820 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 21 15:49:21 crc 
kubenswrapper[4760]: I0121 15:49:21.623508 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.623531 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.645699 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.656229 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.676005 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.682918 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/41967b98-5ae8-45a6-8ec2-1be35218fa5f-etcd-service-ca\") pod \"etcd-operator-b45778765-74x59\" (UID: \"41967b98-5ae8-45a6-8ec2-1be35218fa5f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-74x59" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.696503 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.717310 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.726053 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41967b98-5ae8-45a6-8ec2-1be35218fa5f-serving-cert\") pod 
\"etcd-operator-b45778765-74x59\" (UID: \"41967b98-5ae8-45a6-8ec2-1be35218fa5f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-74x59" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.736647 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.744694 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/41967b98-5ae8-45a6-8ec2-1be35218fa5f-etcd-client\") pod \"etcd-operator-b45778765-74x59\" (UID: \"41967b98-5ae8-45a6-8ec2-1be35218fa5f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-74x59" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.757089 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.777154 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.797049 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.815998 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.826704 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41967b98-5ae8-45a6-8ec2-1be35218fa5f-config\") pod \"etcd-operator-b45778765-74x59\" (UID: \"41967b98-5ae8-45a6-8ec2-1be35218fa5f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-74x59" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.837171 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 21 
15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.843189 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/41967b98-5ae8-45a6-8ec2-1be35218fa5f-etcd-ca\") pod \"etcd-operator-b45778765-74x59\" (UID: \"41967b98-5ae8-45a6-8ec2-1be35218fa5f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-74x59" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.847457 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-jx6dn"] Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.857126 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 21 15:49:21 crc kubenswrapper[4760]: W0121 15:49:21.860479 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76967888_2735_467c_a288_a7bfe13f5690.slice/crio-face6b97e6437955a61f7f7f0863982aed06121531564eb04fc8510cced02975 WatchSource:0}: Error finding container face6b97e6437955a61f7f7f0863982aed06121531564eb04fc8510cced02975: Status 404 returned error can't find the container with id face6b97e6437955a61f7f7f0863982aed06121531564eb04fc8510cced02975 Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.876388 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.896897 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.917039 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.936431 4760 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.957510 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 21 15:49:21 crc kubenswrapper[4760]: I0121 15:49:21.976751 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.002051 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.017002 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.034951 4760 request.go:700] Waited for 1.00904815s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/secrets?fieldSelector=metadata.name%3Dkube-storage-version-migrator-sa-dockercfg-5xfcg&limit=500&resourceVersion=0 Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.037584 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.057109 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.076227 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.096848 4760 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.116217 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.136229 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.142446 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3671d10c-81c6-4c7f-9117-1c237e4efe51-config\") pod \"machine-api-operator-5694c8668f-4x9fq\" (UID: \"3671d10c-81c6-4c7f-9117-1c237e4efe51\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4x9fq" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.156893 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.162893 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3671d10c-81c6-4c7f-9117-1c237e4efe51-images\") pod \"machine-api-operator-5694c8668f-4x9fq\" (UID: \"3671d10c-81c6-4c7f-9117-1c237e4efe51\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4x9fq" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.176747 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.196985 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.207605 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/3671d10c-81c6-4c7f-9117-1c237e4efe51-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-4x9fq\" (UID: \"3671d10c-81c6-4c7f-9117-1c237e4efe51\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4x9fq" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.216659 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.235454 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.255748 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.275439 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.296304 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.317340 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.336056 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.356797 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.376058 4760 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.397682 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.436602 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.456633 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.477011 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.496890 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.516556 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.536526 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.556674 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.577629 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.598086 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 21 
15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.618106 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.636679 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.655713 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" event={"ID":"76967888-2735-467c-a288-a7bfe13f5690","Type":"ContainerStarted","Data":"face6b97e6437955a61f7f7f0863982aed06121531564eb04fc8510cced02975"} Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.657224 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.675662 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.697874 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.717950 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.736780 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.755904 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.775710 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.797545 4760 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.818439 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.838597 4760 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.858711 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.877398 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.896641 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.919701 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.956715 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgzbf\" (UniqueName: \"kubernetes.io/projected/a34869a5-5ade-43ba-874a-487b308a13ca-kube-api-access-jgzbf\") pod \"marketplace-operator-79b997595-fz22j\" (UID: \"a34869a5-5ade-43ba-874a-487b308a13ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-fz22j" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.959906 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fz22j" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.983363 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7drp8\" (UniqueName: \"kubernetes.io/projected/258aeabf-45e1-4b66-bec4-1c7f834e2b77-kube-api-access-7drp8\") pod \"console-operator-58897d9998-nq7q6\" (UID: \"258aeabf-45e1-4b66-bec4-1c7f834e2b77\") " pod="openshift-console-operator/console-operator-58897d9998-nq7q6" Jan 21 15:49:22 crc kubenswrapper[4760]: I0121 15:49:22.995277 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggnkt\" (UniqueName: \"kubernetes.io/projected/dd1ef85b-03cb-4332-98e6-bcc6d38933dd-kube-api-access-ggnkt\") pod \"authentication-operator-69f744f599-tkwlz\" (UID: \"dd1ef85b-03cb-4332-98e6-bcc6d38933dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tkwlz" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.016940 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l99t9\" (UniqueName: \"kubernetes.io/projected/e527c232-4f49-4920-a0cc-403df50c3f9c-kube-api-access-l99t9\") pod \"cluster-samples-operator-665b6dd947-qdfkz\" (UID: \"e527c232-4f49-4920-a0cc-403df50c3f9c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qdfkz" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.035178 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k25rd\" (UniqueName: \"kubernetes.io/projected/27643829-9abc-4f6c-a6e9-5f0c86eb7594-kube-api-access-k25rd\") pod \"controller-manager-879f6c89f-s7vh9\" (UID: \"27643829-9abc-4f6c-a6e9-5f0c86eb7594\") " pod="openshift-controller-manager/controller-manager-879f6c89f-s7vh9" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.054239 4760 request.go:700] Waited for 1.929701826s due to client-side throttling, not priority and fairness, 
request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/serviceaccounts/openshift-controller-manager-operator/token Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.059171 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/07d98bec-099c-43a6-aa43-a96450505b5b-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-qmwgj\" (UID: \"07d98bec-099c-43a6-aa43-a96450505b5b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qmwgj" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.071737 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nc9xl\" (UniqueName: \"kubernetes.io/projected/90c20a1b-2941-4f3e-937d-8629dc663dd2-kube-api-access-nc9xl\") pod \"openshift-controller-manager-operator-756b6f6bc6-94jxh\" (UID: \"90c20a1b-2941-4f3e-937d-8629dc663dd2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-94jxh" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.095512 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vc9ph\" (UniqueName: \"kubernetes.io/projected/1096cfed-6553-45e5-927a-5169e506e758-kube-api-access-vc9ph\") pod \"openshift-apiserver-operator-796bbdcf4f-fg6vb\" (UID: \"1096cfed-6553-45e5-927a-5169e506e758\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fg6vb" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.110858 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52j57\" (UniqueName: \"kubernetes.io/projected/5a1a3d13-a380-46b2-afd6-c8f5dc864f39-kube-api-access-52j57\") pod \"downloads-7954f5f757-gqcsb\" (UID: \"5a1a3d13-a380-46b2-afd6-c8f5dc864f39\") " pod="openshift-console/downloads-7954f5f757-gqcsb" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.126286 4760 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-s7vh9" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.133989 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbccx\" (UniqueName: \"kubernetes.io/projected/8675f6e4-a233-45db-8916-68947da2554c-kube-api-access-lbccx\") pod \"route-controller-manager-6576b87f9c-cxv6j\" (UID: \"8675f6e4-a233-45db-8916-68947da2554c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cxv6j" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.135266 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fz22j"] Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.140646 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-94jxh" Jan 21 15:49:23 crc kubenswrapper[4760]: W0121 15:49:23.150405 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda34869a5_5ade_43ba_874a_487b308a13ca.slice/crio-7c2120986455451a5fa0d8c01d9631a9c25536c6fd2ce07d9866b539400484eb WatchSource:0}: Error finding container 7c2120986455451a5fa0d8c01d9631a9c25536c6fd2ce07d9866b539400484eb: Status 404 returned error can't find the container with id 7c2120986455451a5fa0d8c01d9631a9c25536c6fd2ce07d9866b539400484eb Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.150626 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qdfkz" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.154186 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bfvn\" (UniqueName: \"kubernetes.io/projected/07d98bec-099c-43a6-aa43-a96450505b5b-kube-api-access-2bfvn\") pod \"cluster-image-registry-operator-dc59b4c8b-qmwgj\" (UID: \"07d98bec-099c-43a6-aa43-a96450505b5b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qmwgj" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.169115 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-tkwlz" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.171421 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5d41f70b-9e7e-4e99-8fad-ad4a5a646df1-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sqsn5\" (UID: \"5d41f70b-9e7e-4e99-8fad-ad4a5a646df1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqsn5" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.185531 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-gqcsb" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.193698 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9c28m\" (UniqueName: \"kubernetes.io/projected/c6d4e7cb-581f-4404-b64f-03fb526edeaf-kube-api-access-9c28m\") pod \"machine-approver-56656f9798-vqm67\" (UID: \"c6d4e7cb-581f-4404-b64f-03fb526edeaf\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vqm67" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.198626 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqsn5" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.207522 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-nq7q6" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.214719 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ks7b9\" (UniqueName: \"kubernetes.io/projected/a9e71cc4-57de-4f68-9b1f-ddf9ce2deace-kube-api-access-ks7b9\") pod \"machine-config-operator-74547568cd-w6lsk\" (UID: \"a9e71cc4-57de-4f68-9b1f-ddf9ce2deace\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w6lsk" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.233628 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w6lsk" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.233799 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qmwgj" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.257663 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1affbee-c661-46d6-89cd-08977e347d3c-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-hcqpm\" (UID: \"c1affbee-c661-46d6-89cd-08977e347d3c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hcqpm" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.258074 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvl8t\" (UniqueName: \"kubernetes.io/projected/45549dc9-0155-4d34-927c-25c5fb82872b-kube-api-access-qvl8t\") pod \"apiserver-7bbb656c7d-s9stx\" (UID: \"45549dc9-0155-4d34-927c-25c5fb82872b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.279928 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qssz\" (UniqueName: \"kubernetes.io/projected/816d3ef0-0471-4ee0-998b-947d78f8d3f3-kube-api-access-8qssz\") pod \"openshift-config-operator-7777fb866f-z4gv8\" (UID: \"816d3ef0-0471-4ee0-998b-947d78f8d3f3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-z4gv8" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.295245 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vqm67" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.299085 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgh59\" (UniqueName: \"kubernetes.io/projected/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-kube-api-access-zgh59\") pod \"console-f9d7485db-clnlg\" (UID: \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\") " pod="openshift-console/console-f9d7485db-clnlg" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.316938 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.331755 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cxv6j" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.338451 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wm8h\" (UniqueName: \"kubernetes.io/projected/3671d10c-81c6-4c7f-9117-1c237e4efe51-kube-api-access-4wm8h\") pod \"machine-api-operator-5694c8668f-4x9fq\" (UID: \"3671d10c-81c6-4c7f-9117-1c237e4efe51\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4x9fq" Jan 21 15:49:23 crc kubenswrapper[4760]: W0121 15:49:23.339555 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6d4e7cb_581f_4404_b64f_03fb526edeaf.slice/crio-e4065edd4fcf6db1317a3f4d7d2952f34da071c586cb98ee227de7d65fae4f8d WatchSource:0}: Error finding container e4065edd4fcf6db1317a3f4d7d2952f34da071c586cb98ee227de7d65fae4f8d: Status 404 returned error can't find the container with id e4065edd4fcf6db1317a3f4d7d2952f34da071c586cb98ee227de7d65fae4f8d Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.359638 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-4x9fq" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.374789 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-clnlg" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.379395 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6tkf\" (UniqueName: \"kubernetes.io/projected/41967b98-5ae8-45a6-8ec2-1be35218fa5f-kube-api-access-h6tkf\") pod \"etcd-operator-b45778765-74x59\" (UID: \"41967b98-5ae8-45a6-8ec2-1be35218fa5f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-74x59" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.388435 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.388611 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fg6vb" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.389447 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfs6b\" (UniqueName: \"kubernetes.io/projected/7eb59e83-0fc4-4e75-9ad8-7c5c10b40122-kube-api-access-lfs6b\") pod \"machine-config-controller-84d6567774-b6gcm\" (UID: \"7eb59e83-0fc4-4e75-9ad8-7c5c10b40122\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-b6gcm" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.397490 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.474952 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-z4gv8" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.475418 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.475465 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4d974904-dd7e-42df-8d49-3c5633b30767-installation-pull-secrets\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.475500 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/802d2cd3-498b-4d87-880d-0f23a14c183f-audit-dir\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.475516 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.475536 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ea412328-1a05-4865-94c4-ab85c8694e6f-apiservice-cert\") pod \"packageserver-d55dfcdfc-5qwrq\" (UID: \"ea412328-1a05-4865-94c4-ab85c8694e6f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5qwrq" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.475561 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6wnn\" (UniqueName: \"kubernetes.io/projected/4d974904-dd7e-42df-8d49-3c5633b30767-kube-api-access-d6wnn\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.475575 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntxhv\" (UniqueName: \"kubernetes.io/projected/802d2cd3-498b-4d87-880d-0f23a14c183f-kube-api-access-ntxhv\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.475614 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4d974904-dd7e-42df-8d49-3c5633b30767-bound-sa-token\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.475635 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-ocp-branding-template\") pod 
\"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.475670 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bed95d17-1666-4ad0-afea-faa4a683ed81-metrics-certs\") pod \"router-default-5444994796-gfd2m\" (UID: \"bed95d17-1666-4ad0-afea-faa4a683ed81\") " pod="openshift-ingress/router-default-5444994796-gfd2m" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.475713 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/bed95d17-1666-4ad0-afea-faa4a683ed81-default-certificate\") pod \"router-default-5444994796-gfd2m\" (UID: \"bed95d17-1666-4ad0-afea-faa4a683ed81\") " pod="openshift-ingress/router-default-5444994796-gfd2m" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.475728 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdjd2\" (UniqueName: \"kubernetes.io/projected/26dbd752-d785-488a-879b-543307d0a4cd-kube-api-access-zdjd2\") pod \"kube-storage-version-migrator-operator-b67b599dd-8rjck\" (UID: \"26dbd752-d785-488a-879b-543307d0a4cd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8rjck" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.481356 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e796ed4a-36e3-4630-9c37-3f5b49b6483d-config\") pod \"kube-apiserver-operator-766d6c64bb-f2k2z\" (UID: \"e796ed4a-36e3-4630-9c37-3f5b49b6483d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f2k2z" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 
15:49:23.481447 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.481505 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/0df700c2-3091-4770-b404-cc81bc416387-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-dm455\" (UID: \"0df700c2-3091-4770-b404-cc81bc416387\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dm455" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.481536 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qp2k\" (UniqueName: \"kubernetes.io/projected/ea412328-1a05-4865-94c4-ab85c8694e6f-kube-api-access-2qp2k\") pod \"packageserver-d55dfcdfc-5qwrq\" (UID: \"ea412328-1a05-4865-94c4-ab85c8694e6f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5qwrq" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.481596 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4d974904-dd7e-42df-8d49-3c5633b30767-ca-trust-extracted\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.481627 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" 
(UniqueName: \"kubernetes.io/secret/a6ac7afc-7d8c-44c4-90dc-3c5a9dd3439f-metrics-tls\") pod \"ingress-operator-5b745b69d9-zg9b5\" (UID: \"a6ac7afc-7d8c-44c4-90dc-3c5a9dd3439f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zg9b5" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.481650 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgcrm\" (UniqueName: \"kubernetes.io/projected/a6ac7afc-7d8c-44c4-90dc-3c5a9dd3439f-kube-api-access-tgcrm\") pod \"ingress-operator-5b745b69d9-zg9b5\" (UID: \"a6ac7afc-7d8c-44c4-90dc-3c5a9dd3439f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zg9b5" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.481693 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e796ed4a-36e3-4630-9c37-3f5b49b6483d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-f2k2z\" (UID: \"e796ed4a-36e3-4630-9c37-3f5b49b6483d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f2k2z" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.481731 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/802d2cd3-498b-4d87-880d-0f23a14c183f-audit-policies\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.481789 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b259e37c-8e0b-43ee-8164-320dffe1905d-metrics-tls\") pod \"dns-operator-744455d44c-8jb7f\" (UID: \"b259e37c-8e0b-43ee-8164-320dffe1905d\") " pod="openshift-dns-operator/dns-operator-744455d44c-8jb7f" 
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.481837 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/5537d4e7-89b1-40bb-b87e-d0d1c59840c5-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-kvxcc\" (UID: \"5537d4e7-89b1-40bb-b87e-d0d1c59840c5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kvxcc" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.481919 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxrtm\" (UniqueName: \"kubernetes.io/projected/7ae6da0d-f707-4d3e-8625-cae54fe221d0-kube-api-access-fxrtm\") pod \"multus-admission-controller-857f4d67dd-n6cjk\" (UID: \"7ae6da0d-f707-4d3e-8625-cae54fe221d0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-n6cjk" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.483064 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mndl\" (UniqueName: \"kubernetes.io/projected/5537d4e7-89b1-40bb-b87e-d0d1c59840c5-kube-api-access-9mndl\") pod \"package-server-manager-789f6589d5-kvxcc\" (UID: \"5537d4e7-89b1-40bb-b87e-d0d1c59840c5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kvxcc" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.483146 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ea412328-1a05-4865-94c4-ab85c8694e6f-webhook-cert\") pod \"packageserver-d55dfcdfc-5qwrq\" (UID: \"ea412328-1a05-4865-94c4-ab85c8694e6f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5qwrq" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.483576 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e796ed4a-36e3-4630-9c37-3f5b49b6483d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-f2k2z\" (UID: \"e796ed4a-36e3-4630-9c37-3f5b49b6483d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f2k2z" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.483628 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bed95d17-1666-4ad0-afea-faa4a683ed81-service-ca-bundle\") pod \"router-default-5444994796-gfd2m\" (UID: \"bed95d17-1666-4ad0-afea-faa4a683ed81\") " pod="openshift-ingress/router-default-5444994796-gfd2m" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.483657 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxnjs\" (UniqueName: \"kubernetes.io/projected/a02e3350-da29-44d4-be95-ae71458cc1e2-kube-api-access-vxnjs\") pod \"migrator-59844c95c7-x7qrd\" (UID: \"a02e3350-da29-44d4-be95-ae71458cc1e2\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-x7qrd" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.483698 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26dbd752-d785-488a-879b-543307d0a4cd-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-8rjck\" (UID: \"26dbd752-d785-488a-879b-543307d0a4cd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8rjck" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.483743 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.483786 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.483824 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rb47\" (UniqueName: \"kubernetes.io/projected/bed95d17-1666-4ad0-afea-faa4a683ed81-kube-api-access-4rb47\") pod \"router-default-5444994796-gfd2m\" (UID: \"bed95d17-1666-4ad0-afea-faa4a683ed81\") " pod="openshift-ingress/router-default-5444994796-gfd2m" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.483965 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2v5tx\" (UniqueName: \"kubernetes.io/projected/c8bfbc82-e9e7-4b1d-93a5-a0b2528ebd2a-kube-api-access-2v5tx\") pod \"olm-operator-6b444d44fb-mcpg9\" (UID: \"c8bfbc82-e9e7-4b1d-93a5-a0b2528ebd2a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mcpg9" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.484038 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4d974904-dd7e-42df-8d49-3c5633b30767-trusted-ca\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.484086 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c8bfbc82-e9e7-4b1d-93a5-a0b2528ebd2a-srv-cert\") pod \"olm-operator-6b444d44fb-mcpg9\" (UID: \"c8bfbc82-e9e7-4b1d-93a5-a0b2528ebd2a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mcpg9" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.484109 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqm6h\" (UniqueName: \"kubernetes.io/projected/0df700c2-3091-4770-b404-cc81bc416387-kube-api-access-kqm6h\") pod \"control-plane-machine-set-operator-78cbb6b69f-dm455\" (UID: \"0df700c2-3091-4770-b404-cc81bc416387\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dm455" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.484421 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.484624 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4d974904-dd7e-42df-8d49-3c5633b30767-registry-tls\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.484664 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.484688 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.484953 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.485031 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d" Jan 21 15:49:23 crc kubenswrapper[4760]: E0121 15:49:23.485276 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-21 15:49:23.985262661 +0000 UTC m=+134.653032239 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.485467 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a6ac7afc-7d8c-44c4-90dc-3c5a9dd3439f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-zg9b5\" (UID: \"a6ac7afc-7d8c-44c4-90dc-3c5a9dd3439f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zg9b5" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.485642 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wthl5\" (UniqueName: \"kubernetes.io/projected/b259e37c-8e0b-43ee-8164-320dffe1905d-kube-api-access-wthl5\") pod \"dns-operator-744455d44c-8jb7f\" (UID: \"b259e37c-8e0b-43ee-8164-320dffe1905d\") " pod="openshift-dns-operator/dns-operator-744455d44c-8jb7f" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.485678 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ed7d129a-ef1e-4de8-a0c6-c00d9efda2d8-profile-collector-cert\") pod \"catalog-operator-68c6474976-gnjlk\" (UID: \"ed7d129a-ef1e-4de8-a0c6-c00d9efda2d8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gnjlk" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.485711 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ed7d129a-ef1e-4de8-a0c6-c00d9efda2d8-srv-cert\") pod \"catalog-operator-68c6474976-gnjlk\" (UID: \"ed7d129a-ef1e-4de8-a0c6-c00d9efda2d8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gnjlk" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.485832 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a6ac7afc-7d8c-44c4-90dc-3c5a9dd3439f-trusted-ca\") pod \"ingress-operator-5b745b69d9-zg9b5\" (UID: \"a6ac7afc-7d8c-44c4-90dc-3c5a9dd3439f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zg9b5" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.485896 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4d974904-dd7e-42df-8d49-3c5633b30767-registry-certificates\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.485940 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c8bfbc82-e9e7-4b1d-93a5-a0b2528ebd2a-profile-collector-cert\") pod \"olm-operator-6b444d44fb-mcpg9\" (UID: \"c8bfbc82-e9e7-4b1d-93a5-a0b2528ebd2a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mcpg9" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.485978 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/bed95d17-1666-4ad0-afea-faa4a683ed81-stats-auth\") pod \"router-default-5444994796-gfd2m\" (UID: 
\"bed95d17-1666-4ad0-afea-faa4a683ed81\") " pod="openshift-ingress/router-default-5444994796-gfd2m" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.486015 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26dbd752-d785-488a-879b-543307d0a4cd-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-8rjck\" (UID: \"26dbd752-d785-488a-879b-543307d0a4cd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8rjck" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.486071 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lpdg\" (UniqueName: \"kubernetes.io/projected/ed7d129a-ef1e-4de8-a0c6-c00d9efda2d8-kube-api-access-5lpdg\") pod \"catalog-operator-68c6474976-gnjlk\" (UID: \"ed7d129a-ef1e-4de8-a0c6-c00d9efda2d8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gnjlk" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.486119 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7ae6da0d-f707-4d3e-8625-cae54fe221d0-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-n6cjk\" (UID: \"7ae6da0d-f707-4d3e-8625-cae54fe221d0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-n6cjk" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.486177 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 
15:49:23.486220 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ea412328-1a05-4865-94c4-ab85c8694e6f-tmpfs\") pod \"packageserver-d55dfcdfc-5qwrq\" (UID: \"ea412328-1a05-4865-94c4-ab85c8694e6f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5qwrq" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.492024 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hcqpm" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.587073 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:49:23 crc kubenswrapper[4760]: E0121 15:49:23.587307 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:24.087266654 +0000 UTC m=+134.755036232 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.587396 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/5537d4e7-89b1-40bb-b87e-d0d1c59840c5-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-kvxcc\" (UID: \"5537d4e7-89b1-40bb-b87e-d0d1c59840c5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kvxcc" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.587444 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/326950d2-00f8-43d7-9cbd-6a337226d219-metrics-tls\") pod \"dns-default-sztm4\" (UID: \"326950d2-00f8-43d7-9cbd-6a337226d219\") " pod="openshift-dns/dns-default-sztm4" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.587467 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/cf277e1b-785c-4657-b83d-2e402a3ce097-certs\") pod \"machine-config-server-r8q6j\" (UID: \"cf277e1b-785c-4657-b83d-2e402a3ce097\") " pod="openshift-machine-config-operator/machine-config-server-r8q6j" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.587512 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/73565c46-9349-48bf-9145-e59424ba78f6-plugins-dir\") pod 
\"csi-hostpathplugin-m4t9n\" (UID: \"73565c46-9349-48bf-9145-e59424ba78f6\") " pod="hostpath-provisioner/csi-hostpathplugin-m4t9n" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.587558 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxrtm\" (UniqueName: \"kubernetes.io/projected/7ae6da0d-f707-4d3e-8625-cae54fe221d0-kube-api-access-fxrtm\") pod \"multus-admission-controller-857f4d67dd-n6cjk\" (UID: \"7ae6da0d-f707-4d3e-8625-cae54fe221d0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-n6cjk" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.587588 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mndl\" (UniqueName: \"kubernetes.io/projected/5537d4e7-89b1-40bb-b87e-d0d1c59840c5-kube-api-access-9mndl\") pod \"package-server-manager-789f6589d5-kvxcc\" (UID: \"5537d4e7-89b1-40bb-b87e-d0d1c59840c5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kvxcc" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.587616 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ea412328-1a05-4865-94c4-ab85c8694e6f-webhook-cert\") pod \"packageserver-d55dfcdfc-5qwrq\" (UID: \"ea412328-1a05-4865-94c4-ab85c8694e6f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5qwrq" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.587637 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e796ed4a-36e3-4630-9c37-3f5b49b6483d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-f2k2z\" (UID: \"e796ed4a-36e3-4630-9c37-3f5b49b6483d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f2k2z" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.587658 4760 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bed95d17-1666-4ad0-afea-faa4a683ed81-service-ca-bundle\") pod \"router-default-5444994796-gfd2m\" (UID: \"bed95d17-1666-4ad0-afea-faa4a683ed81\") " pod="openshift-ingress/router-default-5444994796-gfd2m" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.587675 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxnjs\" (UniqueName: \"kubernetes.io/projected/a02e3350-da29-44d4-be95-ae71458cc1e2-kube-api-access-vxnjs\") pod \"migrator-59844c95c7-x7qrd\" (UID: \"a02e3350-da29-44d4-be95-ae71458cc1e2\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-x7qrd" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.587712 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26dbd752-d785-488a-879b-543307d0a4cd-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-8rjck\" (UID: \"26dbd752-d785-488a-879b-543307d0a4cd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8rjck" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.587750 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j88b7\" (UniqueName: \"kubernetes.io/projected/326950d2-00f8-43d7-9cbd-6a337226d219-kube-api-access-j88b7\") pod \"dns-default-sztm4\" (UID: \"326950d2-00f8-43d7-9cbd-6a337226d219\") " pod="openshift-dns/dns-default-sztm4" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.587780 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt5q4\" (UniqueName: \"kubernetes.io/projected/a24dcb12-1228-4acf-bea2-864a7c159e6f-kube-api-access-kt5q4\") pod \"collect-profiles-29483505-mt5nl\" (UID: \"a24dcb12-1228-4acf-bea2-864a7c159e6f\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-mt5nl"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.587821 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.587851 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.587886 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rb47\" (UniqueName: \"kubernetes.io/projected/bed95d17-1666-4ad0-afea-faa4a683ed81-kube-api-access-4rb47\") pod \"router-default-5444994796-gfd2m\" (UID: \"bed95d17-1666-4ad0-afea-faa4a683ed81\") " pod="openshift-ingress/router-default-5444994796-gfd2m"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.587941 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2v5tx\" (UniqueName: \"kubernetes.io/projected/c8bfbc82-e9e7-4b1d-93a5-a0b2528ebd2a-kube-api-access-2v5tx\") pod \"olm-operator-6b444d44fb-mcpg9\" (UID: \"c8bfbc82-e9e7-4b1d-93a5-a0b2528ebd2a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mcpg9"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.587980 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4d974904-dd7e-42df-8d49-3c5633b30767-trusted-ca\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.588004 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c8bfbc82-e9e7-4b1d-93a5-a0b2528ebd2a-srv-cert\") pod \"olm-operator-6b444d44fb-mcpg9\" (UID: \"c8bfbc82-e9e7-4b1d-93a5-a0b2528ebd2a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mcpg9"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.588033 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqm6h\" (UniqueName: \"kubernetes.io/projected/0df700c2-3091-4770-b404-cc81bc416387-kube-api-access-kqm6h\") pod \"control-plane-machine-set-operator-78cbb6b69f-dm455\" (UID: \"0df700c2-3091-4770-b404-cc81bc416387\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dm455"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.588061 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.588110 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4d974904-dd7e-42df-8d49-3c5633b30767-registry-tls\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.588129 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.588149 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.588191 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.588212 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.588236 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a6ac7afc-7d8c-44c4-90dc-3c5a9dd3439f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-zg9b5\" (UID: \"a6ac7afc-7d8c-44c4-90dc-3c5a9dd3439f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zg9b5"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.588251 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/73565c46-9349-48bf-9145-e59424ba78f6-csi-data-dir\") pod \"csi-hostpathplugin-m4t9n\" (UID: \"73565c46-9349-48bf-9145-e59424ba78f6\") " pod="hostpath-provisioner/csi-hostpathplugin-m4t9n"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.588266 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/cf277e1b-785c-4657-b83d-2e402a3ce097-node-bootstrap-token\") pod \"machine-config-server-r8q6j\" (UID: \"cf277e1b-785c-4657-b83d-2e402a3ce097\") " pod="openshift-machine-config-operator/machine-config-server-r8q6j"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.588299 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wthl5\" (UniqueName: \"kubernetes.io/projected/b259e37c-8e0b-43ee-8164-320dffe1905d-kube-api-access-wthl5\") pod \"dns-operator-744455d44c-8jb7f\" (UID: \"b259e37c-8e0b-43ee-8164-320dffe1905d\") " pod="openshift-dns-operator/dns-operator-744455d44c-8jb7f"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.588317 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ed7d129a-ef1e-4de8-a0c6-c00d9efda2d8-profile-collector-cert\") pod \"catalog-operator-68c6474976-gnjlk\" (UID: \"ed7d129a-ef1e-4de8-a0c6-c00d9efda2d8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gnjlk"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.588396 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ed7d129a-ef1e-4de8-a0c6-c00d9efda2d8-srv-cert\") pod \"catalog-operator-68c6474976-gnjlk\" (UID: \"ed7d129a-ef1e-4de8-a0c6-c00d9efda2d8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gnjlk"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.588418 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a6ac7afc-7d8c-44c4-90dc-3c5a9dd3439f-trusted-ca\") pod \"ingress-operator-5b745b69d9-zg9b5\" (UID: \"a6ac7afc-7d8c-44c4-90dc-3c5a9dd3439f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zg9b5"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.588441 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/73565c46-9349-48bf-9145-e59424ba78f6-registration-dir\") pod \"csi-hostpathplugin-m4t9n\" (UID: \"73565c46-9349-48bf-9145-e59424ba78f6\") " pod="hostpath-provisioner/csi-hostpathplugin-m4t9n"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.588468 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9080e6bd-e0c8-46c4-a267-9413c3e0b162-config\") pod \"service-ca-operator-777779d784-rw8t2\" (UID: \"9080e6bd-e0c8-46c4-a267-9413c3e0b162\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rw8t2"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.588492 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4d974904-dd7e-42df-8d49-3c5633b30767-registry-certificates\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.588511 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/73565c46-9349-48bf-9145-e59424ba78f6-mountpoint-dir\") pod \"csi-hostpathplugin-m4t9n\" (UID: \"73565c46-9349-48bf-9145-e59424ba78f6\") " pod="hostpath-provisioner/csi-hostpathplugin-m4t9n"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.588546 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c8bfbc82-e9e7-4b1d-93a5-a0b2528ebd2a-profile-collector-cert\") pod \"olm-operator-6b444d44fb-mcpg9\" (UID: \"c8bfbc82-e9e7-4b1d-93a5-a0b2528ebd2a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mcpg9"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.588565 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/bed95d17-1666-4ad0-afea-faa4a683ed81-stats-auth\") pod \"router-default-5444994796-gfd2m\" (UID: \"bed95d17-1666-4ad0-afea-faa4a683ed81\") " pod="openshift-ingress/router-default-5444994796-gfd2m"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.588582 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20be687d-c18c-434b-9ccf-f6d2ec79e0f3-cert\") pod \"ingress-canary-xq6c8\" (UID: \"20be687d-c18c-434b-9ccf-f6d2ec79e0f3\") " pod="openshift-ingress-canary/ingress-canary-xq6c8"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.588617 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26dbd752-d785-488a-879b-543307d0a4cd-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-8rjck\" (UID: \"26dbd752-d785-488a-879b-543307d0a4cd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8rjck"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.588678 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lpdg\" (UniqueName: \"kubernetes.io/projected/ed7d129a-ef1e-4de8-a0c6-c00d9efda2d8-kube-api-access-5lpdg\") pod \"catalog-operator-68c6474976-gnjlk\" (UID: \"ed7d129a-ef1e-4de8-a0c6-c00d9efda2d8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gnjlk"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.588706 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/326950d2-00f8-43d7-9cbd-6a337226d219-config-volume\") pod \"dns-default-sztm4\" (UID: \"326950d2-00f8-43d7-9cbd-6a337226d219\") " pod="openshift-dns/dns-default-sztm4"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.588752 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7ae6da0d-f707-4d3e-8625-cae54fe221d0-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-n6cjk\" (UID: \"7ae6da0d-f707-4d3e-8625-cae54fe221d0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-n6cjk"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.588807 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.588830 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c197ee1e-f79d-4867-8033-ba4b934a9f86-signing-cabundle\") pod \"service-ca-9c57cc56f-5p8jw\" (UID: \"c197ee1e-f79d-4867-8033-ba4b934a9f86\") " pod="openshift-service-ca/service-ca-9c57cc56f-5p8jw"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.588853 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ea412328-1a05-4865-94c4-ab85c8694e6f-tmpfs\") pod \"packageserver-d55dfcdfc-5qwrq\" (UID: \"ea412328-1a05-4865-94c4-ab85c8694e6f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5qwrq"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.588872 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rckjj\" (UniqueName: \"kubernetes.io/projected/20be687d-c18c-434b-9ccf-f6d2ec79e0f3-kube-api-access-rckjj\") pod \"ingress-canary-xq6c8\" (UID: \"20be687d-c18c-434b-9ccf-f6d2ec79e0f3\") " pod="openshift-ingress-canary/ingress-canary-xq6c8"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.588920 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c197ee1e-f79d-4867-8033-ba4b934a9f86-signing-key\") pod \"service-ca-9c57cc56f-5p8jw\" (UID: \"c197ee1e-f79d-4867-8033-ba4b934a9f86\") " pod="openshift-service-ca/service-ca-9c57cc56f-5p8jw"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.588968 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/73565c46-9349-48bf-9145-e59424ba78f6-socket-dir\") pod \"csi-hostpathplugin-m4t9n\" (UID: \"73565c46-9349-48bf-9145-e59424ba78f6\") " pod="hostpath-provisioner/csi-hostpathplugin-m4t9n"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.588987 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9080e6bd-e0c8-46c4-a267-9413c3e0b162-serving-cert\") pod \"service-ca-operator-777779d784-rw8t2\" (UID: \"9080e6bd-e0c8-46c4-a267-9413c3e0b162\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rw8t2"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.589003 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbbtj\" (UniqueName: \"kubernetes.io/projected/cf277e1b-785c-4657-b83d-2e402a3ce097-kube-api-access-fbbtj\") pod \"machine-config-server-r8q6j\" (UID: \"cf277e1b-785c-4657-b83d-2e402a3ce097\") " pod="openshift-machine-config-operator/machine-config-server-r8q6j"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.589053 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.589074 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4d974904-dd7e-42df-8d49-3c5633b30767-installation-pull-secrets\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.589159 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/802d2cd3-498b-4d87-880d-0f23a14c183f-audit-dir\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.589202 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.589215 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26dbd752-d785-488a-879b-543307d0a4cd-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-8rjck\" (UID: \"26dbd752-d785-488a-879b-543307d0a4cd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8rjck"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.589265 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bed95d17-1666-4ad0-afea-faa4a683ed81-service-ca-bundle\") pod \"router-default-5444994796-gfd2m\" (UID: \"bed95d17-1666-4ad0-afea-faa4a683ed81\") " pod="openshift-ingress/router-default-5444994796-gfd2m"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.589228 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ea412328-1a05-4865-94c4-ab85c8694e6f-apiservice-cert\") pod \"packageserver-d55dfcdfc-5qwrq\" (UID: \"ea412328-1a05-4865-94c4-ab85c8694e6f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5qwrq"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.589362 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqvg7\" (UniqueName: \"kubernetes.io/projected/9080e6bd-e0c8-46c4-a267-9413c3e0b162-kube-api-access-sqvg7\") pod \"service-ca-operator-777779d784-rw8t2\" (UID: \"9080e6bd-e0c8-46c4-a267-9413c3e0b162\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rw8t2"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.589405 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kjbj\" (UniqueName: \"kubernetes.io/projected/73565c46-9349-48bf-9145-e59424ba78f6-kube-api-access-5kjbj\") pod \"csi-hostpathplugin-m4t9n\" (UID: \"73565c46-9349-48bf-9145-e59424ba78f6\") " pod="hostpath-provisioner/csi-hostpathplugin-m4t9n"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.589447 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6wnn\" (UniqueName: \"kubernetes.io/projected/4d974904-dd7e-42df-8d49-3c5633b30767-kube-api-access-d6wnn\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.589477 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntxhv\" (UniqueName: \"kubernetes.io/projected/802d2cd3-498b-4d87-880d-0f23a14c183f-kube-api-access-ntxhv\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.589514 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4d974904-dd7e-42df-8d49-3c5633b30767-bound-sa-token\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.589542 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.589574 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bed95d17-1666-4ad0-afea-faa4a683ed81-metrics-certs\") pod \"router-default-5444994796-gfd2m\" (UID: \"bed95d17-1666-4ad0-afea-faa4a683ed81\") " pod="openshift-ingress/router-default-5444994796-gfd2m"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.589624 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5pl9\" (UniqueName: \"kubernetes.io/projected/c197ee1e-f79d-4867-8033-ba4b934a9f86-kube-api-access-v5pl9\") pod \"service-ca-9c57cc56f-5p8jw\" (UID: \"c197ee1e-f79d-4867-8033-ba4b934a9f86\") " pod="openshift-service-ca/service-ca-9c57cc56f-5p8jw"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.589661 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/bed95d17-1666-4ad0-afea-faa4a683ed81-default-certificate\") pod \"router-default-5444994796-gfd2m\" (UID: \"bed95d17-1666-4ad0-afea-faa4a683ed81\") " pod="openshift-ingress/router-default-5444994796-gfd2m"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.589690 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdjd2\" (UniqueName: \"kubernetes.io/projected/26dbd752-d785-488a-879b-543307d0a4cd-kube-api-access-zdjd2\") pod \"kube-storage-version-migrator-operator-b67b599dd-8rjck\" (UID: \"26dbd752-d785-488a-879b-543307d0a4cd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8rjck"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.589719 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e796ed4a-36e3-4630-9c37-3f5b49b6483d-config\") pod \"kube-apiserver-operator-766d6c64bb-f2k2z\" (UID: \"e796ed4a-36e3-4630-9c37-3f5b49b6483d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f2k2z"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.589761 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.589793 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/0df700c2-3091-4770-b404-cc81bc416387-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-dm455\" (UID: \"0df700c2-3091-4770-b404-cc81bc416387\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dm455"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.589820 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qp2k\" (UniqueName: \"kubernetes.io/projected/ea412328-1a05-4865-94c4-ab85c8694e6f-kube-api-access-2qp2k\") pod \"packageserver-d55dfcdfc-5qwrq\" (UID: \"ea412328-1a05-4865-94c4-ab85c8694e6f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5qwrq"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.589847 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4d974904-dd7e-42df-8d49-3c5633b30767-ca-trust-extracted\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.589877 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a6ac7afc-7d8c-44c4-90dc-3c5a9dd3439f-metrics-tls\") pod \"ingress-operator-5b745b69d9-zg9b5\" (UID: \"a6ac7afc-7d8c-44c4-90dc-3c5a9dd3439f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zg9b5"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.589904 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgcrm\" (UniqueName: \"kubernetes.io/projected/a6ac7afc-7d8c-44c4-90dc-3c5a9dd3439f-kube-api-access-tgcrm\") pod \"ingress-operator-5b745b69d9-zg9b5\" (UID: \"a6ac7afc-7d8c-44c4-90dc-3c5a9dd3439f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zg9b5"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.589928 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a24dcb12-1228-4acf-bea2-864a7c159e6f-config-volume\") pod \"collect-profiles-29483505-mt5nl\" (UID: \"a24dcb12-1228-4acf-bea2-864a7c159e6f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-mt5nl"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.589958 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e796ed4a-36e3-4630-9c37-3f5b49b6483d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-f2k2z\" (UID: \"e796ed4a-36e3-4630-9c37-3f5b49b6483d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f2k2z"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.589998 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/802d2cd3-498b-4d87-880d-0f23a14c183f-audit-policies\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.590045 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b259e37c-8e0b-43ee-8164-320dffe1905d-metrics-tls\") pod \"dns-operator-744455d44c-8jb7f\" (UID: \"b259e37c-8e0b-43ee-8164-320dffe1905d\") " pod="openshift-dns-operator/dns-operator-744455d44c-8jb7f"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.590096 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a24dcb12-1228-4acf-bea2-864a7c159e6f-secret-volume\") pod \"collect-profiles-29483505-mt5nl\" (UID: \"a24dcb12-1228-4acf-bea2-864a7c159e6f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-mt5nl"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.590355 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4d974904-dd7e-42df-8d49-3c5633b30767-trusted-ca\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.591877 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4d974904-dd7e-42df-8d49-3c5633b30767-registry-tls\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.592928 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.593214 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ea412328-1a05-4865-94c4-ab85c8694e6f-apiservice-cert\") pod \"packageserver-d55dfcdfc-5qwrq\" (UID: \"ea412328-1a05-4865-94c4-ab85c8694e6f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5qwrq"
Jan 21 15:49:23 crc kubenswrapper[4760]: E0121 15:49:23.595675 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:24.095647818 +0000 UTC m=+134.763417456 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.596189 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/802d2cd3-498b-4d87-880d-0f23a14c183f-audit-policies\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.596699 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-b6gcm"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.596722 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ea412328-1a05-4865-94c4-ab85c8694e6f-tmpfs\") pod \"packageserver-d55dfcdfc-5qwrq\" (UID: \"ea412328-1a05-4865-94c4-ab85c8694e6f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5qwrq"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.597056 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.598227 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e796ed4a-36e3-4630-9c37-3f5b49b6483d-config\") pod \"kube-apiserver-operator-766d6c64bb-f2k2z\" (UID: \"e796ed4a-36e3-4630-9c37-3f5b49b6483d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f2k2z"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.601357 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4d974904-dd7e-42df-8d49-3c5633b30767-registry-certificates\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.602547 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a6ac7afc-7d8c-44c4-90dc-3c5a9dd3439f-trusted-ca\") pod \"ingress-operator-5b745b69d9-zg9b5\" (UID: \"a6ac7afc-7d8c-44c4-90dc-3c5a9dd3439f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zg9b5"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.602929 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/802d2cd3-498b-4d87-880d-0f23a14c183f-audit-dir\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.603253 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/5537d4e7-89b1-40bb-b87e-d0d1c59840c5-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-kvxcc\" (UID: \"5537d4e7-89b1-40bb-b87e-d0d1c59840c5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kvxcc"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.603440 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4d974904-dd7e-42df-8d49-3c5633b30767-ca-trust-extracted\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.604044 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.604948 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ed7d129a-ef1e-4de8-a0c6-c00d9efda2d8-srv-cert\") pod \"catalog-operator-68c6474976-gnjlk\" (UID: \"ed7d129a-ef1e-4de8-a0c6-c00d9efda2d8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gnjlk"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.605148 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-s7vh9"]
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.608761 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ea412328-1a05-4865-94c4-ab85c8694e6f-webhook-cert\") pod \"packageserver-d55dfcdfc-5qwrq\" (UID: \"ea412328-1a05-4865-94c4-ab85c8694e6f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5qwrq"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.609014 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.611991 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4d974904-dd7e-42df-8d49-3c5633b30767-installation-pull-secrets\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.614941 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b259e37c-8e0b-43ee-8164-320dffe1905d-metrics-tls\") pod \"dns-operator-744455d44c-8jb7f\" (UID: \"b259e37c-8e0b-43ee-8164-320dffe1905d\") " pod="openshift-dns-operator/dns-operator-744455d44c-8jb7f"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.615018 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c8bfbc82-e9e7-4b1d-93a5-a0b2528ebd2a-srv-cert\") pod \"olm-operator-6b444d44fb-mcpg9\" (UID: \"c8bfbc82-e9e7-4b1d-93a5-a0b2528ebd2a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mcpg9"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.615235 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.615376 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.616135 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26dbd752-d785-488a-879b-543307d0a4cd-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-8rjck\" (UID: \"26dbd752-d785-488a-879b-543307d0a4cd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8rjck"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.616401 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bed95d17-1666-4ad0-afea-faa4a683ed81-metrics-certs\") pod \"router-default-5444994796-gfd2m\" (UID: \"bed95d17-1666-4ad0-afea-faa4a683ed81\") " pod="openshift-ingress/router-default-5444994796-gfd2m"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.616647 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/bed95d17-1666-4ad0-afea-faa4a683ed81-default-certificate\") pod \"router-default-5444994796-gfd2m\" (UID: \"bed95d17-1666-4ad0-afea-faa4a683ed81\") " pod="openshift-ingress/router-default-5444994796-gfd2m"
Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.618145 4760 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-74x59" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.618173 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ed7d129a-ef1e-4de8-a0c6-c00d9efda2d8-profile-collector-cert\") pod \"catalog-operator-68c6474976-gnjlk\" (UID: \"ed7d129a-ef1e-4de8-a0c6-c00d9efda2d8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gnjlk" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.619107 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-94jxh"] Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.619137 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.619681 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.619907 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7ae6da0d-f707-4d3e-8625-cae54fe221d0-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-n6cjk\" (UID: \"7ae6da0d-f707-4d3e-8625-cae54fe221d0\") " 
pod="openshift-multus/multus-admission-controller-857f4d67dd-n6cjk" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.620184 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.620975 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e796ed4a-36e3-4630-9c37-3f5b49b6483d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-f2k2z\" (UID: \"e796ed4a-36e3-4630-9c37-3f5b49b6483d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f2k2z" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.620993 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/bed95d17-1666-4ad0-afea-faa4a683ed81-stats-auth\") pod \"router-default-5444994796-gfd2m\" (UID: \"bed95d17-1666-4ad0-afea-faa4a683ed81\") " pod="openshift-ingress/router-default-5444994796-gfd2m" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.621145 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c8bfbc82-e9e7-4b1d-93a5-a0b2528ebd2a-profile-collector-cert\") pod \"olm-operator-6b444d44fb-mcpg9\" (UID: \"c8bfbc82-e9e7-4b1d-93a5-a0b2528ebd2a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mcpg9" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.621720 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/0df700c2-3091-4770-b404-cc81bc416387-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-dm455\" (UID: \"0df700c2-3091-4770-b404-cc81bc416387\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dm455" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.622025 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.622748 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a6ac7afc-7d8c-44c4-90dc-3c5a9dd3439f-metrics-tls\") pod \"ingress-operator-5b745b69d9-zg9b5\" (UID: \"a6ac7afc-7d8c-44c4-90dc-3c5a9dd3439f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zg9b5" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.626864 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.638877 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxnjs\" (UniqueName: \"kubernetes.io/projected/a02e3350-da29-44d4-be95-ae71458cc1e2-kube-api-access-vxnjs\") pod \"migrator-59844c95c7-x7qrd\" (UID: \"a02e3350-da29-44d4-be95-ae71458cc1e2\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-x7qrd" Jan 21 15:49:23 crc 
kubenswrapper[4760]: I0121 15:49:23.640931 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-x7qrd" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.667484 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6wnn\" (UniqueName: \"kubernetes.io/projected/4d974904-dd7e-42df-8d49-3c5633b30767-kube-api-access-d6wnn\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.692481 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:49:23 crc kubenswrapper[4760]: E0121 15:49:23.693446 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:24.193400082 +0000 UTC m=+134.861169660 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.693926 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/73565c46-9349-48bf-9145-e59424ba78f6-csi-data-dir\") pod \"csi-hostpathplugin-m4t9n\" (UID: \"73565c46-9349-48bf-9145-e59424ba78f6\") " pod="hostpath-provisioner/csi-hostpathplugin-m4t9n" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.695678 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/73565c46-9349-48bf-9145-e59424ba78f6-csi-data-dir\") pod \"csi-hostpathplugin-m4t9n\" (UID: \"73565c46-9349-48bf-9145-e59424ba78f6\") " pod="hostpath-provisioner/csi-hostpathplugin-m4t9n" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.696581 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/cf277e1b-785c-4657-b83d-2e402a3ce097-node-bootstrap-token\") pod \"machine-config-server-r8q6j\" (UID: \"cf277e1b-785c-4657-b83d-2e402a3ce097\") " pod="openshift-machine-config-operator/machine-config-server-r8q6j" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.696679 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/73565c46-9349-48bf-9145-e59424ba78f6-registration-dir\") pod \"csi-hostpathplugin-m4t9n\" (UID: \"73565c46-9349-48bf-9145-e59424ba78f6\") " 
pod="hostpath-provisioner/csi-hostpathplugin-m4t9n" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.698184 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/73565c46-9349-48bf-9145-e59424ba78f6-registration-dir\") pod \"csi-hostpathplugin-m4t9n\" (UID: \"73565c46-9349-48bf-9145-e59424ba78f6\") " pod="hostpath-provisioner/csi-hostpathplugin-m4t9n" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.696710 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9080e6bd-e0c8-46c4-a267-9413c3e0b162-config\") pod \"service-ca-operator-777779d784-rw8t2\" (UID: \"9080e6bd-e0c8-46c4-a267-9413c3e0b162\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rw8t2" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.698296 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/73565c46-9349-48bf-9145-e59424ba78f6-mountpoint-dir\") pod \"csi-hostpathplugin-m4t9n\" (UID: \"73565c46-9349-48bf-9145-e59424ba78f6\") " pod="hostpath-provisioner/csi-hostpathplugin-m4t9n" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.698368 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20be687d-c18c-434b-9ccf-f6d2ec79e0f3-cert\") pod \"ingress-canary-xq6c8\" (UID: \"20be687d-c18c-434b-9ccf-f6d2ec79e0f3\") " pod="openshift-ingress-canary/ingress-canary-xq6c8" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.698426 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/326950d2-00f8-43d7-9cbd-6a337226d219-config-volume\") pod \"dns-default-sztm4\" (UID: \"326950d2-00f8-43d7-9cbd-6a337226d219\") " pod="openshift-dns/dns-default-sztm4" Jan 21 15:49:23 crc 
kubenswrapper[4760]: I0121 15:49:23.698489 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c197ee1e-f79d-4867-8033-ba4b934a9f86-signing-cabundle\") pod \"service-ca-9c57cc56f-5p8jw\" (UID: \"c197ee1e-f79d-4867-8033-ba4b934a9f86\") " pod="openshift-service-ca/service-ca-9c57cc56f-5p8jw" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.698521 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rckjj\" (UniqueName: \"kubernetes.io/projected/20be687d-c18c-434b-9ccf-f6d2ec79e0f3-kube-api-access-rckjj\") pod \"ingress-canary-xq6c8\" (UID: \"20be687d-c18c-434b-9ccf-f6d2ec79e0f3\") " pod="openshift-ingress-canary/ingress-canary-xq6c8" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.698582 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c197ee1e-f79d-4867-8033-ba4b934a9f86-signing-key\") pod \"service-ca-9c57cc56f-5p8jw\" (UID: \"c197ee1e-f79d-4867-8033-ba4b934a9f86\") " pod="openshift-service-ca/service-ca-9c57cc56f-5p8jw" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.698613 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/73565c46-9349-48bf-9145-e59424ba78f6-socket-dir\") pod \"csi-hostpathplugin-m4t9n\" (UID: \"73565c46-9349-48bf-9145-e59424ba78f6\") " pod="hostpath-provisioner/csi-hostpathplugin-m4t9n" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.698639 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbbtj\" (UniqueName: \"kubernetes.io/projected/cf277e1b-785c-4657-b83d-2e402a3ce097-kube-api-access-fbbtj\") pod \"machine-config-server-r8q6j\" (UID: \"cf277e1b-785c-4657-b83d-2e402a3ce097\") " pod="openshift-machine-config-operator/machine-config-server-r8q6j" Jan 21 15:49:23 crc 
kubenswrapper[4760]: I0121 15:49:23.698660 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9080e6bd-e0c8-46c4-a267-9413c3e0b162-serving-cert\") pod \"service-ca-operator-777779d784-rw8t2\" (UID: \"9080e6bd-e0c8-46c4-a267-9413c3e0b162\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rw8t2" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.698718 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kjbj\" (UniqueName: \"kubernetes.io/projected/73565c46-9349-48bf-9145-e59424ba78f6-kube-api-access-5kjbj\") pod \"csi-hostpathplugin-m4t9n\" (UID: \"73565c46-9349-48bf-9145-e59424ba78f6\") " pod="hostpath-provisioner/csi-hostpathplugin-m4t9n" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.698747 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqvg7\" (UniqueName: \"kubernetes.io/projected/9080e6bd-e0c8-46c4-a267-9413c3e0b162-kube-api-access-sqvg7\") pod \"service-ca-operator-777779d784-rw8t2\" (UID: \"9080e6bd-e0c8-46c4-a267-9413c3e0b162\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rw8t2" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.698806 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5pl9\" (UniqueName: \"kubernetes.io/projected/c197ee1e-f79d-4867-8033-ba4b934a9f86-kube-api-access-v5pl9\") pod \"service-ca-9c57cc56f-5p8jw\" (UID: \"c197ee1e-f79d-4867-8033-ba4b934a9f86\") " pod="openshift-service-ca/service-ca-9c57cc56f-5p8jw" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.698880 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a24dcb12-1228-4acf-bea2-864a7c159e6f-config-volume\") pod \"collect-profiles-29483505-mt5nl\" (UID: \"a24dcb12-1228-4acf-bea2-864a7c159e6f\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-mt5nl" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.698947 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a24dcb12-1228-4acf-bea2-864a7c159e6f-secret-volume\") pod \"collect-profiles-29483505-mt5nl\" (UID: \"a24dcb12-1228-4acf-bea2-864a7c159e6f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-mt5nl" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.698977 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/326950d2-00f8-43d7-9cbd-6a337226d219-metrics-tls\") pod \"dns-default-sztm4\" (UID: \"326950d2-00f8-43d7-9cbd-6a337226d219\") " pod="openshift-dns/dns-default-sztm4" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.698996 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/cf277e1b-785c-4657-b83d-2e402a3ce097-certs\") pod \"machine-config-server-r8q6j\" (UID: \"cf277e1b-785c-4657-b83d-2e402a3ce097\") " pod="openshift-machine-config-operator/machine-config-server-r8q6j" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.699045 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/73565c46-9349-48bf-9145-e59424ba78f6-plugins-dir\") pod \"csi-hostpathplugin-m4t9n\" (UID: \"73565c46-9349-48bf-9145-e59424ba78f6\") " pod="hostpath-provisioner/csi-hostpathplugin-m4t9n" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.699121 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j88b7\" (UniqueName: \"kubernetes.io/projected/326950d2-00f8-43d7-9cbd-6a337226d219-kube-api-access-j88b7\") pod \"dns-default-sztm4\" (UID: \"326950d2-00f8-43d7-9cbd-6a337226d219\") " 
pod="openshift-dns/dns-default-sztm4" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.699144 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kt5q4\" (UniqueName: \"kubernetes.io/projected/a24dcb12-1228-4acf-bea2-864a7c159e6f-kube-api-access-kt5q4\") pod \"collect-profiles-29483505-mt5nl\" (UID: \"a24dcb12-1228-4acf-bea2-864a7c159e6f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-mt5nl" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.699355 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9080e6bd-e0c8-46c4-a267-9413c3e0b162-config\") pod \"service-ca-operator-777779d784-rw8t2\" (UID: \"9080e6bd-e0c8-46c4-a267-9413c3e0b162\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rw8t2" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.701013 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/cf277e1b-785c-4657-b83d-2e402a3ce097-node-bootstrap-token\") pod \"machine-config-server-r8q6j\" (UID: \"cf277e1b-785c-4657-b83d-2e402a3ce097\") " pod="openshift-machine-config-operator/machine-config-server-r8q6j" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.704506 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/73565c46-9349-48bf-9145-e59424ba78f6-plugins-dir\") pod \"csi-hostpathplugin-m4t9n\" (UID: \"73565c46-9349-48bf-9145-e59424ba78f6\") " pod="hostpath-provisioner/csi-hostpathplugin-m4t9n" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.705881 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c197ee1e-f79d-4867-8033-ba4b934a9f86-signing-cabundle\") pod \"service-ca-9c57cc56f-5p8jw\" (UID: 
\"c197ee1e-f79d-4867-8033-ba4b934a9f86\") " pod="openshift-service-ca/service-ca-9c57cc56f-5p8jw" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.705964 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/73565c46-9349-48bf-9145-e59424ba78f6-mountpoint-dir\") pod \"csi-hostpathplugin-m4t9n\" (UID: \"73565c46-9349-48bf-9145-e59424ba78f6\") " pod="hostpath-provisioner/csi-hostpathplugin-m4t9n" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.708392 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a24dcb12-1228-4acf-bea2-864a7c159e6f-config-volume\") pod \"collect-profiles-29483505-mt5nl\" (UID: \"a24dcb12-1228-4acf-bea2-864a7c159e6f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-mt5nl" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.711127 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntxhv\" (UniqueName: \"kubernetes.io/projected/802d2cd3-498b-4d87-880d-0f23a14c183f-kube-api-access-ntxhv\") pod \"oauth-openshift-558db77b4-p467d\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " pod="openshift-authentication/oauth-openshift-558db77b4-p467d" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.711225 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/73565c46-9349-48bf-9145-e59424ba78f6-socket-dir\") pod \"csi-hostpathplugin-m4t9n\" (UID: \"73565c46-9349-48bf-9145-e59424ba78f6\") " pod="hostpath-provisioner/csi-hostpathplugin-m4t9n" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.711517 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/326950d2-00f8-43d7-9cbd-6a337226d219-config-volume\") pod \"dns-default-sztm4\" (UID: 
\"326950d2-00f8-43d7-9cbd-6a337226d219\") " pod="openshift-dns/dns-default-sztm4" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.720992 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20be687d-c18c-434b-9ccf-f6d2ec79e0f3-cert\") pod \"ingress-canary-xq6c8\" (UID: \"20be687d-c18c-434b-9ccf-f6d2ec79e0f3\") " pod="openshift-ingress-canary/ingress-canary-xq6c8" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.724395 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-s7vh9" event={"ID":"27643829-9abc-4f6c-a6e9-5f0c86eb7594","Type":"ContainerStarted","Data":"b426f095c9eeebbc73d791d28bf5018ba0416025738bc645f659c6cca9d8374b"} Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.729072 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c197ee1e-f79d-4867-8033-ba4b934a9f86-signing-key\") pod \"service-ca-9c57cc56f-5p8jw\" (UID: \"c197ee1e-f79d-4867-8033-ba4b934a9f86\") " pod="openshift-service-ca/service-ca-9c57cc56f-5p8jw" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.730508 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a24dcb12-1228-4acf-bea2-864a7c159e6f-secret-volume\") pod \"collect-profiles-29483505-mt5nl\" (UID: \"a24dcb12-1228-4acf-bea2-864a7c159e6f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-mt5nl" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.730544 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9080e6bd-e0c8-46c4-a267-9413c3e0b162-serving-cert\") pod \"service-ca-operator-777779d784-rw8t2\" (UID: \"9080e6bd-e0c8-46c4-a267-9413c3e0b162\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rw8t2" Jan 21 15:49:23 crc 
kubenswrapper[4760]: I0121 15:49:23.734202 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qdfkz"] Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.734288 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-tkwlz"] Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.734385 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/cf277e1b-785c-4657-b83d-2e402a3ce097-certs\") pod \"machine-config-server-r8q6j\" (UID: \"cf277e1b-785c-4657-b83d-2e402a3ce097\") " pod="openshift-machine-config-operator/machine-config-server-r8q6j" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.736832 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/326950d2-00f8-43d7-9cbd-6a337226d219-metrics-tls\") pod \"dns-default-sztm4\" (UID: \"326950d2-00f8-43d7-9cbd-6a337226d219\") " pod="openshift-dns/dns-default-sztm4" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.739906 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxrtm\" (UniqueName: \"kubernetes.io/projected/7ae6da0d-f707-4d3e-8625-cae54fe221d0-kube-api-access-fxrtm\") pod \"multus-admission-controller-857f4d67dd-n6cjk\" (UID: \"7ae6da0d-f707-4d3e-8625-cae54fe221d0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-n6cjk" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.740123 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-gqcsb"] Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.740112 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4d974904-dd7e-42df-8d49-3c5633b30767-bound-sa-token\") pod 
\"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.740992 4760 generic.go:334] "Generic (PLEG): container finished" podID="76967888-2735-467c-a288-a7bfe13f5690" containerID="29bceac94c6f2d3f676d5f45187e666fad72514c6813964bf9dcb2e0a9dec659" exitCode=0 Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.741692 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" event={"ID":"76967888-2735-467c-a288-a7bfe13f5690","Type":"ContainerDied","Data":"29bceac94c6f2d3f676d5f45187e666fad72514c6813964bf9dcb2e0a9dec659"} Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.747834 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vqm67" event={"ID":"c6d4e7cb-581f-4404-b64f-03fb526edeaf","Type":"ContainerStarted","Data":"e4065edd4fcf6db1317a3f4d7d2952f34da071c586cb98ee227de7d65fae4f8d"} Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.750931 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgcrm\" (UniqueName: \"kubernetes.io/projected/a6ac7afc-7d8c-44c4-90dc-3c5a9dd3439f-kube-api-access-tgcrm\") pod \"ingress-operator-5b745b69d9-zg9b5\" (UID: \"a6ac7afc-7d8c-44c4-90dc-3c5a9dd3439f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zg9b5" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.751408 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fz22j" event={"ID":"a34869a5-5ade-43ba-874a-487b308a13ca","Type":"ContainerStarted","Data":"a84f68e02072935de857eb0ccce03cb61e9a06f782b0eb4db705cf4ab896ea16"} Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.751458 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/marketplace-operator-79b997595-fz22j" event={"ID":"a34869a5-5ade-43ba-874a-487b308a13ca","Type":"ContainerStarted","Data":"7c2120986455451a5fa0d8c01d9631a9c25536c6fd2ce07d9866b539400484eb"} Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.752037 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-fz22j" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.757015 4760 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-fz22j container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/healthz\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.757188 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-fz22j" podUID="a34869a5-5ade-43ba-874a-487b308a13ca" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.18:8080/healthz\": dial tcp 10.217.0.18:8080: connect: connection refused" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.765316 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e796ed4a-36e3-4630-9c37-3f5b49b6483d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-f2k2z\" (UID: \"e796ed4a-36e3-4630-9c37-3f5b49b6483d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f2k2z" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.783721 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqm6h\" (UniqueName: \"kubernetes.io/projected/0df700c2-3091-4770-b404-cc81bc416387-kube-api-access-kqm6h\") pod \"control-plane-machine-set-operator-78cbb6b69f-dm455\" (UID: \"0df700c2-3091-4770-b404-cc81bc416387\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dm455" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.798228 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wthl5\" (UniqueName: \"kubernetes.io/projected/b259e37c-8e0b-43ee-8164-320dffe1905d-kube-api-access-wthl5\") pod \"dns-operator-744455d44c-8jb7f\" (UID: \"b259e37c-8e0b-43ee-8164-320dffe1905d\") " pod="openshift-dns-operator/dns-operator-744455d44c-8jb7f" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.801259 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:23 crc kubenswrapper[4760]: E0121 15:49:23.801737 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:24.301717001 +0000 UTC m=+134.969486639 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.814783 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mndl\" (UniqueName: \"kubernetes.io/projected/5537d4e7-89b1-40bb-b87e-d0d1c59840c5-kube-api-access-9mndl\") pod \"package-server-manager-789f6589d5-kvxcc\" (UID: \"5537d4e7-89b1-40bb-b87e-d0d1c59840c5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kvxcc" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.853199 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqsn5"] Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.855905 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rb47\" (UniqueName: \"kubernetes.io/projected/bed95d17-1666-4ad0-afea-faa4a683ed81-kube-api-access-4rb47\") pod \"router-default-5444994796-gfd2m\" (UID: \"bed95d17-1666-4ad0-afea-faa4a683ed81\") " pod="openshift-ingress/router-default-5444994796-gfd2m" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.863861 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qmwgj"] Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.868566 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f2k2z" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.880134 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lpdg\" (UniqueName: \"kubernetes.io/projected/ed7d129a-ef1e-4de8-a0c6-c00d9efda2d8-kube-api-access-5lpdg\") pod \"catalog-operator-68c6474976-gnjlk\" (UID: \"ed7d129a-ef1e-4de8-a0c6-c00d9efda2d8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gnjlk" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.886848 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2v5tx\" (UniqueName: \"kubernetes.io/projected/c8bfbc82-e9e7-4b1d-93a5-a0b2528ebd2a-kube-api-access-2v5tx\") pod \"olm-operator-6b444d44fb-mcpg9\" (UID: \"c8bfbc82-e9e7-4b1d-93a5-a0b2528ebd2a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mcpg9" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.889827 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-8jb7f" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.902223 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:49:23 crc kubenswrapper[4760]: E0121 15:49:23.907125 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:24.407095587 +0000 UTC m=+135.074865165 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.908098 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-gfd2m" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.908961 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a6ac7afc-7d8c-44c4-90dc-3c5a9dd3439f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-zg9b5\" (UID: \"a6ac7afc-7d8c-44c4-90dc-3c5a9dd3439f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zg9b5" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.911044 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-p467d" Jan 21 15:49:23 crc kubenswrapper[4760]: W0121 15:49:23.915160 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a1a3d13_a380_46b2_afd6_c8f5dc864f39.slice/crio-847088f07ff740eaa4db379e88736f02c9cceab7ccaccb17bbd72e1ff155d293 WatchSource:0}: Error finding container 847088f07ff740eaa4db379e88736f02c9cceab7ccaccb17bbd72e1ff155d293: Status 404 returned error can't find the container with id 847088f07ff740eaa4db379e88736f02c9cceab7ccaccb17bbd72e1ff155d293 Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.916932 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qp2k\" (UniqueName: \"kubernetes.io/projected/ea412328-1a05-4865-94c4-ab85c8694e6f-kube-api-access-2qp2k\") pod \"packageserver-d55dfcdfc-5qwrq\" (UID: \"ea412328-1a05-4865-94c4-ab85c8694e6f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5qwrq" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.952792 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dm455" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.960409 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdjd2\" (UniqueName: \"kubernetes.io/projected/26dbd752-d785-488a-879b-543307d0a4cd-kube-api-access-zdjd2\") pod \"kube-storage-version-migrator-operator-b67b599dd-8rjck\" (UID: \"26dbd752-d785-488a-879b-543307d0a4cd\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8rjck" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.968632 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mcpg9" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.974486 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-n6cjk" Jan 21 15:49:23 crc kubenswrapper[4760]: W0121 15:49:23.975626 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d41f70b_9e7e_4e99_8fad_ad4a5a646df1.slice/crio-ad121281fa6d596a2490624f246f4e66ddd169ea273cea368aa88763e76e4f34 WatchSource:0}: Error finding container ad121281fa6d596a2490624f246f4e66ddd169ea273cea368aa88763e76e4f34: Status 404 returned error can't find the container with id ad121281fa6d596a2490624f246f4e66ddd169ea273cea368aa88763e76e4f34 Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.975813 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kt5q4\" (UniqueName: \"kubernetes.io/projected/a24dcb12-1228-4acf-bea2-864a7c159e6f-kube-api-access-kt5q4\") pod \"collect-profiles-29483505-mt5nl\" (UID: \"a24dcb12-1228-4acf-bea2-864a7c159e6f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-mt5nl" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.985454 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kvxcc" Jan 21 15:49:23 crc kubenswrapper[4760]: I0121 15:49:23.992018 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gnjlk" Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.000840 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5qwrq" Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.005290 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j88b7\" (UniqueName: \"kubernetes.io/projected/326950d2-00f8-43d7-9cbd-6a337226d219-kube-api-access-j88b7\") pod \"dns-default-sztm4\" (UID: \"326950d2-00f8-43d7-9cbd-6a337226d219\") " pod="openshift-dns/dns-default-sztm4" Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.008269 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:24 crc kubenswrapper[4760]: E0121 15:49:24.008669 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:24.508652561 +0000 UTC m=+135.176422139 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.017780 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-mt5nl" Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.017844 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-w6lsk"] Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.020615 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-nq7q6"] Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.036224 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-sztm4" Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.038405 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kjbj\" (UniqueName: \"kubernetes.io/projected/73565c46-9349-48bf-9145-e59424ba78f6-kube-api-access-5kjbj\") pod \"csi-hostpathplugin-m4t9n\" (UID: \"73565c46-9349-48bf-9145-e59424ba78f6\") " pod="hostpath-provisioner/csi-hostpathplugin-m4t9n" Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.046879 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqvg7\" (UniqueName: \"kubernetes.io/projected/9080e6bd-e0c8-46c4-a267-9413c3e0b162-kube-api-access-sqvg7\") pod \"service-ca-operator-777779d784-rw8t2\" (UID: \"9080e6bd-e0c8-46c4-a267-9413c3e0b162\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rw8t2" Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.055939 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5pl9\" (UniqueName: \"kubernetes.io/projected/c197ee1e-f79d-4867-8033-ba4b934a9f86-kube-api-access-v5pl9\") pod \"service-ca-9c57cc56f-5p8jw\" (UID: \"c197ee1e-f79d-4867-8033-ba4b934a9f86\") " pod="openshift-service-ca/service-ca-9c57cc56f-5p8jw" Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.058080 4760 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-m4t9n" Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.080446 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rckjj\" (UniqueName: \"kubernetes.io/projected/20be687d-c18c-434b-9ccf-f6d2ec79e0f3-kube-api-access-rckjj\") pod \"ingress-canary-xq6c8\" (UID: \"20be687d-c18c-434b-9ccf-f6d2ec79e0f3\") " pod="openshift-ingress-canary/ingress-canary-xq6c8" Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.093842 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbbtj\" (UniqueName: \"kubernetes.io/projected/cf277e1b-785c-4657-b83d-2e402a3ce097-kube-api-access-fbbtj\") pod \"machine-config-server-r8q6j\" (UID: \"cf277e1b-785c-4657-b83d-2e402a3ce097\") " pod="openshift-machine-config-operator/machine-config-server-r8q6j" Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.109008 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:49:24 crc kubenswrapper[4760]: E0121 15:49:24.109779 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:24.609760586 +0000 UTC m=+135.277530164 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.183444 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zg9b5" Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.210630 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:24 crc kubenswrapper[4760]: E0121 15:49:24.210998 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:24.710984997 +0000 UTC m=+135.378754565 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.233351 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8rjck" Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.269498 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-cxv6j"] Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.303660 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hcqpm"] Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.311814 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-4x9fq"] Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.312397 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rw8t2" Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.316887 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:49:24 crc kubenswrapper[4760]: E0121 15:49:24.317589 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:24.817570043 +0000 UTC m=+135.485339611 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.320984 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx"] Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.325686 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-5p8jw" Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.330814 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-clnlg"] Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.342393 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-xq6c8" Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.362436 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-r8q6j" Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.419065 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:24 crc kubenswrapper[4760]: E0121 15:49:24.419563 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:24.919546665 +0000 UTC m=+135.587316243 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.517925 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-z4gv8"] Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.522129 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:49:24 crc kubenswrapper[4760]: E0121 15:49:24.525098 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:25.025078267 +0000 UTC m=+135.692847845 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.546180 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fg6vb"] Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.627990 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:24 crc kubenswrapper[4760]: E0121 15:49:24.628679 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:25.128659197 +0000 UTC m=+135.796428775 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.729912 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:49:24 crc kubenswrapper[4760]: E0121 15:49:24.730441 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:25.23041977 +0000 UTC m=+135.898189348 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.808246 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hcqpm" event={"ID":"c1affbee-c661-46d6-89cd-08977e347d3c","Type":"ContainerStarted","Data":"c1e9db6fc401844041a434822355a3f983b6573df2a43a287179c4afc2b3ad09"} Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.832625 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:24 crc kubenswrapper[4760]: E0121 15:49:24.833172 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:25.333158874 +0000 UTC m=+136.000928452 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.835568 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vqm67" event={"ID":"c6d4e7cb-581f-4404-b64f-03fb526edeaf","Type":"ContainerStarted","Data":"35b63a8e341feaddaa5ddbde81be5bd9701ccbb85be28c4b8ed30e8df75c4332"} Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.840807 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx" event={"ID":"45549dc9-0155-4d34-927c-25c5fb82872b","Type":"ContainerStarted","Data":"dbbad068385487f332045616155bfb5617c953d6b98f5eab607dedacb9a25bc2"} Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.874934 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qmwgj" event={"ID":"07d98bec-099c-43a6-aa43-a96450505b5b","Type":"ContainerStarted","Data":"7cc2dcfe1a0c1eb0182673c8b0a2a311af6c30b19e32f337e123824a17c1dff1"} Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.882141 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-s7vh9" event={"ID":"27643829-9abc-4f6c-a6e9-5f0c86eb7594","Type":"ContainerStarted","Data":"9733408d618c3d3f112b42748587eff2b4ac3646c57ed2c7de3d5c0e01526379"} Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.882342 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-controller-manager/controller-manager-879f6c89f-s7vh9"
Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.887808 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-4x9fq" event={"ID":"3671d10c-81c6-4c7f-9117-1c237e4efe51","Type":"ContainerStarted","Data":"cdbe87cc88169f7bb9eb9befcb7e128da6a1bf58032f35c81011d0127accaa12"}
Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.893394 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-b6gcm"]
Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.905336 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-gqcsb" event={"ID":"5a1a3d13-a380-46b2-afd6-c8f5dc864f39","Type":"ContainerStarted","Data":"847088f07ff740eaa4db379e88736f02c9cceab7ccaccb17bbd72e1ff155d293"}
Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.906958 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqsn5" event={"ID":"5d41f70b-9e7e-4e99-8fad-ad4a5a646df1","Type":"ContainerStarted","Data":"ad121281fa6d596a2490624f246f4e66ddd169ea273cea368aa88763e76e4f34"}
Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.914151 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-clnlg" event={"ID":"dca5ed86-6716-40a8-a0d9-b403b3d3edd2","Type":"ContainerStarted","Data":"0281b55255f4efb1b0f1c85ffa5cb54711c643739e6e91bd25c713e06089b8a2"}
Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.915959 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cxv6j" event={"ID":"8675f6e4-a233-45db-8916-68947da2554c","Type":"ContainerStarted","Data":"a4442532034d55aec7694e2aae62d46cc806fbd6825379546e85cefe87fe5880"}
Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.921305 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-s7vh9"
Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.927867 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-94jxh" event={"ID":"90c20a1b-2941-4f3e-937d-8629dc663dd2","Type":"ContainerStarted","Data":"ee26cc6fe324a4cce650c71c36d1f51f6d2f267ac1da92d7f5f55323b4d89d17"}
Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.927925 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-94jxh" event={"ID":"90c20a1b-2941-4f3e-937d-8629dc663dd2","Type":"ContainerStarted","Data":"a8607ca5e7a1d3fab2ddaab1535c5b8d9b2fa7bbfc1f26f114032c8071fe0f57"}
Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.933337 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:49:24 crc kubenswrapper[4760]: E0121 15:49:24.933537 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:25.433519778 +0000 UTC m=+136.101289356 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.933693 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j"
Jan 21 15:49:24 crc kubenswrapper[4760]: E0121 15:49:24.934014 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:25.434006139 +0000 UTC m=+136.101775717 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.935356 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-nq7q6" event={"ID":"258aeabf-45e1-4b66-bec4-1c7f834e2b77","Type":"ContainerStarted","Data":"96c0604bc0fd3f7d99d121f99bdeb7c649e4e4e972a55d43d2fabed63fd22fd7"}
Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.945435 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" event={"ID":"76967888-2735-467c-a288-a7bfe13f5690","Type":"ContainerStarted","Data":"a984ec214f84d006959a62186c68d9d7757bee7166c405d019dba84fee2ab96c"}
Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.978934 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-74x59"]
Jan 21 15:49:24 crc kubenswrapper[4760]: I0121 15:49:24.979689 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qdfkz" event={"ID":"e527c232-4f49-4920-a0cc-403df50c3f9c","Type":"ContainerStarted","Data":"91a38018ffb76d52fdbaca039f760b64be203a5f6b876fed35eaed85cb987d34"}
Jan 21 15:49:25 crc kubenswrapper[4760]: I0121 15:49:25.005649 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-tkwlz" event={"ID":"dd1ef85b-03cb-4332-98e6-bcc6d38933dd","Type":"ContainerStarted","Data":"d9d53521a6053440eaa5a501820b07bff4257a4d4e0a868e91ad2ab52f3e09dc"}
Jan 21 15:49:25 crc kubenswrapper[4760]: I0121 15:49:25.011482 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-gfd2m" event={"ID":"bed95d17-1666-4ad0-afea-faa4a683ed81","Type":"ContainerStarted","Data":"2b793f9c667c8cb9a38fa7e4a538ea7204157c9c829bef64a59fcfcb00331b2f"}
Jan 21 15:49:25 crc kubenswrapper[4760]: I0121 15:49:25.015870 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w6lsk" event={"ID":"a9e71cc4-57de-4f68-9b1f-ddf9ce2deace","Type":"ContainerStarted","Data":"5eb3737edeeff0b48ec26e2dc9bbe3ed27feda3da112dc7cb10183cdecfeed84"}
Jan 21 15:49:25 crc kubenswrapper[4760]: I0121 15:49:25.035048 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-fz22j"
Jan 21 15:49:25 crc kubenswrapper[4760]: I0121 15:49:25.038190 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:49:25 crc kubenswrapper[4760]: E0121 15:49:25.039878 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:25.539855664 +0000 UTC m=+136.207625242 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:25 crc kubenswrapper[4760]: I0121 15:49:25.122125 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-fz22j" podStartSLOduration=117.122097283 podStartE2EDuration="1m57.122097283s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:25.077536124 +0000 UTC m=+135.745305692" watchObservedRunningTime="2026-01-21 15:49:25.122097283 +0000 UTC m=+135.789866861"
Jan 21 15:49:25 crc kubenswrapper[4760]: I0121 15:49:25.143242 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j"
Jan 21 15:49:25 crc kubenswrapper[4760]: E0121 15:49:25.145109 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:25.645093604 +0000 UTC m=+136.312863182 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:25 crc kubenswrapper[4760]: W0121 15:49:25.215378 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7eb59e83_0fc4_4e75_9ad8_7c5c10b40122.slice/crio-853a8f9c3c3b9cb9fffcd56a9f971029dacc68c9e4b89a8aedf14534ba83ddea WatchSource:0}: Error finding container 853a8f9c3c3b9cb9fffcd56a9f971029dacc68c9e4b89a8aedf14534ba83ddea: Status 404 returned error can't find the container with id 853a8f9c3c3b9cb9fffcd56a9f971029dacc68c9e4b89a8aedf14534ba83ddea
Jan 21 15:49:25 crc kubenswrapper[4760]: I0121 15:49:25.244311 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:49:25 crc kubenswrapper[4760]: E0121 15:49:25.245951 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:25.745920716 +0000 UTC m=+136.413690294 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:25 crc kubenswrapper[4760]: I0121 15:49:25.347732 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j"
Jan 21 15:49:25 crc kubenswrapper[4760]: E0121 15:49:25.348126 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:25.848109397 +0000 UTC m=+136.515878975 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:25 crc kubenswrapper[4760]: I0121 15:49:25.450563 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:49:25 crc kubenswrapper[4760]: E0121 15:49:25.450915 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:25.950893753 +0000 UTC m=+136.618663331 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:25 crc kubenswrapper[4760]: I0121 15:49:25.450990 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j"
Jan 21 15:49:25 crc kubenswrapper[4760]: E0121 15:49:25.451372 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:25.951364593 +0000 UTC m=+136.619134171 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:25 crc kubenswrapper[4760]: I0121 15:49:25.552948 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:49:25 crc kubenswrapper[4760]: E0121 15:49:25.553475 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:26.05344824 +0000 UTC m=+136.721217818 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:25 crc kubenswrapper[4760]: I0121 15:49:25.553827 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j"
Jan 21 15:49:25 crc kubenswrapper[4760]: E0121 15:49:25.554268 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:26.054257754 +0000 UTC m=+136.722027332 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:25 crc kubenswrapper[4760]: I0121 15:49:25.557978 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-s7vh9" podStartSLOduration=117.55795253 podStartE2EDuration="1m57.55795253s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:25.557812964 +0000 UTC m=+136.225582542" watchObservedRunningTime="2026-01-21 15:49:25.55795253 +0000 UTC m=+136.225722108"
Jan 21 15:49:25 crc kubenswrapper[4760]: I0121 15:49:25.602583 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-94jxh" podStartSLOduration=117.602555251 podStartE2EDuration="1m57.602555251s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:25.602475838 +0000 UTC m=+136.270245416" watchObservedRunningTime="2026-01-21 15:49:25.602555251 +0000 UTC m=+136.270324829"
Jan 21 15:49:25 crc kubenswrapper[4760]: I0121 15:49:25.626813 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-tkwlz" podStartSLOduration=118.626784923 podStartE2EDuration="1m58.626784923s" podCreationTimestamp="2026-01-21 15:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:25.626189638 +0000 UTC m=+136.293959226" watchObservedRunningTime="2026-01-21 15:49:25.626784923 +0000 UTC m=+136.294554501"
Jan 21 15:49:25 crc kubenswrapper[4760]: I0121 15:49:25.655868 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:49:25 crc kubenswrapper[4760]: E0121 15:49:25.656078 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:26.156042728 +0000 UTC m=+136.823812306 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:25 crc kubenswrapper[4760]: I0121 15:49:25.656120 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j"
Jan 21 15:49:25 crc kubenswrapper[4760]: E0121 15:49:25.656583 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:26.15657078 +0000 UTC m=+136.824340438 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:25 crc kubenswrapper[4760]: I0121 15:49:25.756938 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:49:25 crc kubenswrapper[4760]: E0121 15:49:25.757122 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:26.25708415 +0000 UTC m=+136.924853758 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:25 crc kubenswrapper[4760]: I0121 15:49:25.758159 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j"
Jan 21 15:49:25 crc kubenswrapper[4760]: E0121 15:49:25.758592 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:26.258580753 +0000 UTC m=+136.926350401 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:25 crc kubenswrapper[4760]: I0121 15:49:25.859887 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:49:25 crc kubenswrapper[4760]: E0121 15:49:25.865918 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:26.36588372 +0000 UTC m=+137.033653298 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:25 crc kubenswrapper[4760]: I0121 15:49:25.973124 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j"
Jan 21 15:49:25 crc kubenswrapper[4760]: E0121 15:49:25.973513 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:26.4734957 +0000 UTC m=+137.141265288 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.054777 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-74x59" event={"ID":"41967b98-5ae8-45a6-8ec2-1be35218fa5f","Type":"ContainerStarted","Data":"d3d42a5cd5bf7be8e1e44050f71ac879bb57ce30f3bcb4bb38d4e205f5eb32ee"}
Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.061243 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w6lsk" event={"ID":"a9e71cc4-57de-4f68-9b1f-ddf9ce2deace","Type":"ContainerStarted","Data":"8c20739278c68512e07eb06462f0a8648b853c3ab8fae9d284091a585ad14aae"}
Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.062084 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-r8q6j" event={"ID":"cf277e1b-785c-4657-b83d-2e402a3ce097","Type":"ContainerStarted","Data":"7b957c27716f2dae182b9234fa0d47b5e4c331663f987af4c50ab3721fdd50f1"}
Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.063149 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cxv6j" event={"ID":"8675f6e4-a233-45db-8916-68947da2554c","Type":"ContainerStarted","Data":"ad826b1f0424c95593a8dfd54d176b790a48e64c979f43d669eee1fc5fd67ad3"}
Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.064985 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cxv6j"
Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.069522 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f2k2z"]
Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.078354 4760 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-cxv6j container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.078439 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cxv6j" podUID="8675f6e4-a233-45db-8916-68947da2554c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused"
Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.082109 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-z4gv8" event={"ID":"816d3ef0-0471-4ee0-998b-947d78f8d3f3","Type":"ContainerStarted","Data":"504fc8ef509b7fb7efe935d14e810bb0e6d2043133c69f7bf553a9723be87421"}
Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.089368 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:49:26 crc kubenswrapper[4760]: E0121 15:49:26.090861 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:26.590823189 +0000 UTC m=+137.258592777 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.117190 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j"
Jan 21 15:49:26 crc kubenswrapper[4760]: E0121 15:49:26.117550 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:26.617537257 +0000 UTC m=+137.285306835 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.096917 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dm455"]
Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.118727 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qdfkz" event={"ID":"e527c232-4f49-4920-a0cc-403df50c3f9c","Type":"ContainerStarted","Data":"cd0ffb04578ceced947540c2993f6ae9f6163eee78f8a5eb935c62ba1fb6b0de"}
Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.145586 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cxv6j" podStartSLOduration=118.145540688 podStartE2EDuration="1m58.145540688s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:26.143285003 +0000 UTC m=+136.811054581" watchObservedRunningTime="2026-01-21 15:49:26.145540688 +0000 UTC m=+136.813310266"
Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.146307 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-8jb7f"]
Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.175673 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-tkwlz" event={"ID":"dd1ef85b-03cb-4332-98e6-bcc6d38933dd","Type":"ContainerStarted","Data":"8a90d781455204b7a6667a65369bca19a2feb6ff9df88dd1a77a5c516b93174e"}
Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.202023 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vqm67" event={"ID":"c6d4e7cb-581f-4404-b64f-03fb526edeaf","Type":"ContainerStarted","Data":"826178fb6ca15302f542ef088cae35efca423b222dfd2870a98c64e4b0bac89c"}
Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.223865 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:49:26 crc kubenswrapper[4760]: E0121 15:49:26.224271 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:26.724221617 +0000 UTC m=+137.391991215 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.258860 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqsn5" event={"ID":"5d41f70b-9e7e-4e99-8fad-ad4a5a646df1","Type":"ContainerStarted","Data":"afa0d8fc5f6f03d932312788939854c41b0def5451651cfb1d438e1927dbae0d"}
Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.262851 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-x7qrd"]
Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.279001 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-p467d"]
Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.284867 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-vqm67" podStartSLOduration=119.284842295 podStartE2EDuration="1m59.284842295s" podCreationTimestamp="2026-01-21 15:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:26.284223439 +0000 UTC m=+136.951993017" watchObservedRunningTime="2026-01-21 15:49:26.284842295 +0000 UTC m=+136.952611863"
Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.301748 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fg6vb" event={"ID":"1096cfed-6553-45e5-927a-5169e506e758","Type":"ContainerStarted","Data":"571e64bd2af7984e1be710181d97b420af3fccb32508218a5956bc87a03f07b6"}
Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.304919 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kvxcc"]
Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.332390 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j"
Jan 21 15:49:26 crc kubenswrapper[4760]: E0121 15:49:26.333101 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:26.83308016 +0000 UTC m=+137.500849738 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.336873 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sqsn5" podStartSLOduration=118.336849259 podStartE2EDuration="1m58.336849259s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:26.33617947 +0000 UTC m=+137.003949048" watchObservedRunningTime="2026-01-21 15:49:26.336849259 +0000 UTC m=+137.004618837" Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.366712 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-xq6c8"] Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.378120 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mcpg9"] Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.392353 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-zg9b5"] Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.398392 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qmwgj" event={"ID":"07d98bec-099c-43a6-aa43-a96450505b5b","Type":"ContainerStarted","Data":"373416f09205629699d231a7600359ce0570ff4ba0d17b5727a745eb3c240482"} Jan 21 15:49:26 crc 
kubenswrapper[4760]: I0121 15:49:26.416181 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-nq7q6" event={"ID":"258aeabf-45e1-4b66-bec4-1c7f834e2b77","Type":"ContainerStarted","Data":"86b96cf1e0be3607ff242445cec4a0db1cf7dc0d87205aca613e970c210b3d91"} Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.416477 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-nq7q6" Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.442963 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:49:26 crc kubenswrapper[4760]: E0121 15:49:26.444686 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:26.944656427 +0000 UTC m=+137.612426005 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.447902 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-b6gcm" event={"ID":"7eb59e83-0fc4-4e75-9ad8-7c5c10b40122","Type":"ContainerStarted","Data":"853a8f9c3c3b9cb9fffcd56a9f971029dacc68c9e4b89a8aedf14534ba83ddea"} Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.469215 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-gqcsb" event={"ID":"5a1a3d13-a380-46b2-afd6-c8f5dc864f39","Type":"ContainerStarted","Data":"136324e21c3e5566aec09764ac1fb9aa0f5c291fc5274bc2717b05eaaa548463"} Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.469270 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-gqcsb" Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.471825 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qmwgj" podStartSLOduration=118.471812092 podStartE2EDuration="1m58.471812092s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:26.462466498 +0000 UTC m=+137.130236076" watchObservedRunningTime="2026-01-21 15:49:26.471812092 +0000 UTC m=+137.139581670" Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.474384 4760 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-n6cjk"] Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.474419 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-sztm4"] Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.474525 4760 patch_prober.go:28] interesting pod/downloads-7954f5f757-gqcsb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.474558 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gqcsb" podUID="5a1a3d13-a380-46b2-afd6-c8f5dc864f39" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.501511 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gnjlk"] Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.528043 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-nq7q6" podStartSLOduration=118.528023384 podStartE2EDuration="1m58.528023384s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:26.524908452 +0000 UTC m=+137.192678030" watchObservedRunningTime="2026-01-21 15:49:26.528023384 +0000 UTC m=+137.195792962" Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.529512 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5qwrq"] Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 
15:49:26.533730 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-rw8t2"] Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.548417 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:26 crc kubenswrapper[4760]: E0121 15:49:26.549362 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:27.049349133 +0000 UTC m=+137.717118711 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.556386 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-gqcsb" podStartSLOduration=118.556359529 podStartE2EDuration="1m58.556359529s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:26.550994203 +0000 UTC m=+137.218763791" watchObservedRunningTime="2026-01-21 15:49:26.556359529 +0000 UTC m=+137.224129107" Jan 21 
15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.628620 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8rjck"] Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.650983 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:49:26 crc kubenswrapper[4760]: E0121 15:49:26.652758 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:27.152734595 +0000 UTC m=+137.820504173 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.668004 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-5p8jw"] Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.688168 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-m4t9n"] Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.697900 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483505-mt5nl"] Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.754524 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:26 crc kubenswrapper[4760]: E0121 15:49:26.755350 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:27.255316372 +0000 UTC m=+137.923085950 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.857281 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:49:26 crc kubenswrapper[4760]: E0121 15:49:26.857683 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:27.3576632 +0000 UTC m=+138.025432778 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.885505 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-nq7q6" Jan 21 15:49:26 crc kubenswrapper[4760]: I0121 15:49:26.959017 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:26 crc kubenswrapper[4760]: E0121 15:49:26.959764 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:27.459741776 +0000 UTC m=+138.127511354 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.062267 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:49:27 crc kubenswrapper[4760]: E0121 15:49:27.063362 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:27.563319066 +0000 UTC m=+138.231088654 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.164374 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:27 crc kubenswrapper[4760]: E0121 15:49:27.164817 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:27.664804077 +0000 UTC m=+138.332573645 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.268694 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:49:27 crc kubenswrapper[4760]: E0121 15:49:27.269180 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:27.7691613 +0000 UTC m=+138.436930868 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.370634 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:27 crc kubenswrapper[4760]: E0121 15:49:27.371092 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:27.871071829 +0000 UTC m=+138.538841407 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.471353 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:49:27 crc kubenswrapper[4760]: E0121 15:49:27.471712 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:27.971696124 +0000 UTC m=+138.639465702 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.515334 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-mt5nl" event={"ID":"a24dcb12-1228-4acf-bea2-864a7c159e6f","Type":"ContainerStarted","Data":"6951e69f04a2a5c86ea7d6c3049c25b0a87bd8c7e48d7d29e32024371bddd38f"} Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.516661 4760 generic.go:334] "Generic (PLEG): container finished" podID="816d3ef0-0471-4ee0-998b-947d78f8d3f3" containerID="57f4636c0de63b31598b791b50c3b59731a69863de96a66105752fd199e9640c" exitCode=0 Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.516703 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-z4gv8" event={"ID":"816d3ef0-0471-4ee0-998b-947d78f8d3f3","Type":"ContainerDied","Data":"57f4636c0de63b31598b791b50c3b59731a69863de96a66105752fd199e9640c"} Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.539561 4760 csr.go:261] certificate signing request csr-d4vkm is approved, waiting to be issued Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.539832 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gnjlk" event={"ID":"ed7d129a-ef1e-4de8-a0c6-c00d9efda2d8","Type":"ContainerStarted","Data":"3066ddc9f993d5a4cdbe2fd509273742957152a8591d6cdb8e8c78ebc7486ca4"} Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.551787 4760 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5qwrq" event={"ID":"ea412328-1a05-4865-94c4-ab85c8694e6f","Type":"ContainerStarted","Data":"d4f15d8db190493af0c921f3c075ab12754b4d4fad1e429e43949d3567c3fb3f"} Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.555352 4760 csr.go:257] certificate signing request csr-d4vkm is issued Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.567635 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hcqpm" event={"ID":"c1affbee-c661-46d6-89cd-08977e347d3c","Type":"ContainerStarted","Data":"319b91be82b093ede114f09b136655a032ad376efb7a7917eefcd7693b8a268e"} Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.572272 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:27 crc kubenswrapper[4760]: E0121 15:49:27.573461 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:28.073447937 +0000 UTC m=+138.741217515 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.579461 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kvxcc" event={"ID":"5537d4e7-89b1-40bb-b87e-d0d1c59840c5","Type":"ContainerStarted","Data":"ff74951c2bf4395c6ab99f0b2fb0d0049393afbff0b862159575d8f50a0d1203"} Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.592275 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qdfkz" event={"ID":"e527c232-4f49-4920-a0cc-403df50c3f9c","Type":"ContainerStarted","Data":"ff3c870150f5eee67b98f07adad30c3e65009abe7dfb700d6b5e742dda59f92f"} Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.603813 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-74x59" event={"ID":"41967b98-5ae8-45a6-8ec2-1be35218fa5f","Type":"ContainerStarted","Data":"1c1b5c0d88654fe18f4018714f589837ecc6c328844d97c214a0d418d070b8f4"} Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.610973 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-8jb7f" event={"ID":"b259e37c-8e0b-43ee-8164-320dffe1905d","Type":"ContainerStarted","Data":"264fd2ee42c90ec038226a34f718679733a73fcb442d2c2891d0e99326349c4e"} Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.613977 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-m4t9n" 
event={"ID":"73565c46-9349-48bf-9145-e59424ba78f6","Type":"ContainerStarted","Data":"2da73fc179b73d2b7ad845816bfa2d94f58742bba85caf4081f77f4c31be8f16"} Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.621667 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f2k2z" event={"ID":"e796ed4a-36e3-4630-9c37-3f5b49b6483d","Type":"ContainerStarted","Data":"a5e8595207e766355beed2ed673990d6844e8e471f89d3c0761245739948d02d"} Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.640842 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qdfkz" podStartSLOduration=120.640816889 podStartE2EDuration="2m0.640816889s" podCreationTimestamp="2026-01-21 15:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:27.638900968 +0000 UTC m=+138.306670546" watchObservedRunningTime="2026-01-21 15:49:27.640816889 +0000 UTC m=+138.308586467" Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.650776 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-gfd2m" event={"ID":"bed95d17-1666-4ad0-afea-faa4a683ed81","Type":"ContainerStarted","Data":"bf7a3b08a44bce5c63fb288662b66782b947203addb149284138b63d8149d5ee"} Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.672893 4760 generic.go:334] "Generic (PLEG): container finished" podID="45549dc9-0155-4d34-927c-25c5fb82872b" containerID="07dd7df136ea7e85048c6ef5ac9c5bd97a0f5d003149a63158916aecd1a9c4cf" exitCode=0 Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.673022 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx" 
event={"ID":"45549dc9-0155-4d34-927c-25c5fb82872b","Type":"ContainerDied","Data":"07dd7df136ea7e85048c6ef5ac9c5bd97a0f5d003149a63158916aecd1a9c4cf"} Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.674033 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:49:27 crc kubenswrapper[4760]: E0121 15:49:27.674166 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:28.174144805 +0000 UTC m=+138.841914383 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.674437 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:27 crc kubenswrapper[4760]: E0121 15:49:27.676238 4760 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:28.176227432 +0000 UTC m=+138.843997220 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.687039 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-hcqpm" podStartSLOduration=119.687011997 podStartE2EDuration="1m59.687011997s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:27.685991274 +0000 UTC m=+138.353760852" watchObservedRunningTime="2026-01-21 15:49:27.687011997 +0000 UTC m=+138.354781595" Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.704732 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-r8q6j" event={"ID":"cf277e1b-785c-4657-b83d-2e402a3ce097","Type":"ContainerStarted","Data":"919c705d832be28d5c785682b138a450cbb8b2e313afa1acbda1d8424154b56f"} Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.729586 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-n6cjk" 
event={"ID":"7ae6da0d-f707-4d3e-8625-cae54fe221d0","Type":"ContainerStarted","Data":"1a3348f7f8149ce3dae08933d7d1a1895bd776e5448215cdb2a7b332c1b7d8b7"} Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.744276 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zg9b5" event={"ID":"a6ac7afc-7d8c-44c4-90dc-3c5a9dd3439f","Type":"ContainerStarted","Data":"c08cf747f09d66fb6c922edbc2434c1006d49c566fdcfc7bdbcc25fa2ffd7954"} Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.752451 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dm455" event={"ID":"0df700c2-3091-4770-b404-cc81bc416387","Type":"ContainerStarted","Data":"91d5d779d942c7acddecc7c85bec2107bb2fe521b4a4a59636d8284b4e8c0fda"} Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.761507 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-sztm4" event={"ID":"326950d2-00f8-43d7-9cbd-6a337226d219","Type":"ContainerStarted","Data":"ea05ccb9dde7429d572c32a36bbc5a15b3d58a0d3483c8119153a7b148793584"} Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.767749 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-x7qrd" event={"ID":"a02e3350-da29-44d4-be95-ae71458cc1e2","Type":"ContainerStarted","Data":"05d87ee2a50da3353a715d9c194581d6388335fd7bd10c23ce2825a399f66fcc"} Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.769633 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-74x59" podStartSLOduration=119.769616582 podStartE2EDuration="1m59.769616582s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:27.768091708 +0000 UTC m=+138.435861296" 
watchObservedRunningTime="2026-01-21 15:49:27.769616582 +0000 UTC m=+138.437386160" Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.784440 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:49:27 crc kubenswrapper[4760]: E0121 15:49:27.785227 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:28.2852089 +0000 UTC m=+138.952978478 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.789691 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-clnlg" event={"ID":"dca5ed86-6716-40a8-a0d9-b403b3d3edd2","Type":"ContainerStarted","Data":"962db233844f01b29292400cdf37469718690a355d43834f3bbbce67ad03b938"} Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.809653 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8rjck" 
event={"ID":"26dbd752-d785-488a-879b-543307d0a4cd","Type":"ContainerStarted","Data":"d776c0048557411022fa41f2863f11388f488c09401876fd64a85c4f306db27f"} Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.811246 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-4x9fq" event={"ID":"3671d10c-81c6-4c7f-9117-1c237e4efe51","Type":"ContainerStarted","Data":"e82c62da4291012bcebd2e77eedc6f65e36cd0d13ef960f0f5e04a23da18e63e"} Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.813378 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" event={"ID":"76967888-2735-467c-a288-a7bfe13f5690","Type":"ContainerStarted","Data":"fa4567d20782556bf5e3c69a2bfa2511dc569082032415d98bfeb7481d4cdb74"} Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.815887 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-xq6c8" event={"ID":"20be687d-c18c-434b-9ccf-f6d2ec79e0f3","Type":"ContainerStarted","Data":"1668d132d042fda2bb3705dab7cb711d484ce0fe17fe2ed07e9394fee9a4ace4"} Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.817065 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-5p8jw" event={"ID":"c197ee1e-f79d-4867-8033-ba4b934a9f86","Type":"ContainerStarted","Data":"ec83de2f53bf27b19c216597facae5bed34388d3b7b561d1e34fe983c5e1e825"} Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.829836 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-b6gcm" event={"ID":"7eb59e83-0fc4-4e75-9ad8-7c5c10b40122","Type":"ContainerStarted","Data":"6bdd6db4c3a8d7ade3708619509a2e960235f2e6715d6b58785e50e6b98c3433"} Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.829905 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-b6gcm" event={"ID":"7eb59e83-0fc4-4e75-9ad8-7c5c10b40122","Type":"ContainerStarted","Data":"430c7ec0cb9e5b6e88e1a6af5e8a0ae7cc8476b25bd08d6bb85fe55ee5fa45be"} Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.838087 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-gfd2m" podStartSLOduration=119.83805889 podStartE2EDuration="1m59.83805889s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:27.83497764 +0000 UTC m=+138.502747238" watchObservedRunningTime="2026-01-21 15:49:27.83805889 +0000 UTC m=+138.505828468" Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.842623 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rw8t2" event={"ID":"9080e6bd-e0c8-46c4-a267-9413c3e0b162","Type":"ContainerStarted","Data":"7cefb6e3ceecc77f6d67d0163121a568d3f22f6e0a1c95f128e384796886148c"} Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.889357 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.877283 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-p467d" event={"ID":"802d2cd3-498b-4d87-880d-0f23a14c183f","Type":"ContainerStarted","Data":"1e75b9ece68e658666e9933bb9705b3b459094ff3173c1971fa155503f65bca4"} Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 
15:49:27.933291 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-gfd2m" Jan 21 15:49:27 crc kubenswrapper[4760]: E0121 15:49:27.937418 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:28.43739095 +0000 UTC m=+139.105160528 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.943269 4760 patch_prober.go:28] interesting pod/router-default-5444994796-gfd2m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 15:49:27 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld Jan 21 15:49:27 crc kubenswrapper[4760]: [+]process-running ok Jan 21 15:49:27 crc kubenswrapper[4760]: healthz check failed Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.957118 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gfd2m" podUID="bed95d17-1666-4ad0-afea-faa4a683ed81" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.961814 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-clnlg" 
podStartSLOduration=119.961782809 podStartE2EDuration="1m59.961782809s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:27.888812641 +0000 UTC m=+138.556582229" watchObservedRunningTime="2026-01-21 15:49:27.961782809 +0000 UTC m=+138.629552387" Jan 21 15:49:27 crc kubenswrapper[4760]: I0121 15:49:27.998767 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mcpg9" event={"ID":"c8bfbc82-e9e7-4b1d-93a5-a0b2528ebd2a","Type":"ContainerStarted","Data":"f1b9bf1b6a640c2c1393804fad7629b80782afc8d205bf6f5d604dca70493aa4"} Jan 21 15:49:28 crc kubenswrapper[4760]: I0121 15:49:28.049860 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:49:28 crc kubenswrapper[4760]: E0121 15:49:28.050480 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:28.55046097 +0000 UTC m=+139.218230548 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:28 crc kubenswrapper[4760]: I0121 15:49:28.062594 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-b6gcm" podStartSLOduration=120.062562301 podStartE2EDuration="2m0.062562301s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:28.047588559 +0000 UTC m=+138.715358137" watchObservedRunningTime="2026-01-21 15:49:28.062562301 +0000 UTC m=+138.730331879" Jan 21 15:49:28 crc kubenswrapper[4760]: I0121 15:49:28.066806 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fg6vb" event={"ID":"1096cfed-6553-45e5-927a-5169e506e758","Type":"ContainerStarted","Data":"23a99f229f9454f45b766f22ac33f611fffc28e1d7261fc2797f7adc91f4603b"} Jan 21 15:49:28 crc kubenswrapper[4760]: I0121 15:49:28.067983 4760 patch_prober.go:28] interesting pod/downloads-7954f5f757-gqcsb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 21 15:49:28 crc kubenswrapper[4760]: I0121 15:49:28.068044 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gqcsb" podUID="5a1a3d13-a380-46b2-afd6-c8f5dc864f39" containerName="download-server" 
probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 21 15:49:28 crc kubenswrapper[4760]: I0121 15:49:28.086352 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" podStartSLOduration=121.086317363 podStartE2EDuration="2m1.086317363s" podCreationTimestamp="2026-01-21 15:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:28.084243985 +0000 UTC m=+138.752013563" watchObservedRunningTime="2026-01-21 15:49:28.086317363 +0000 UTC m=+138.754086941" Jan 21 15:49:28 crc kubenswrapper[4760]: I0121 15:49:28.138504 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-r8q6j" podStartSLOduration=7.138479173 podStartE2EDuration="7.138479173s" podCreationTimestamp="2026-01-21 15:49:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:28.115046085 +0000 UTC m=+138.782815663" watchObservedRunningTime="2026-01-21 15:49:28.138479173 +0000 UTC m=+138.806248751" Jan 21 15:49:28 crc kubenswrapper[4760]: I0121 15:49:28.142839 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-fg6vb" podStartSLOduration=121.142830017 podStartE2EDuration="2m1.142830017s" podCreationTimestamp="2026-01-21 15:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:28.137783194 +0000 UTC m=+138.805552772" watchObservedRunningTime="2026-01-21 15:49:28.142830017 +0000 UTC m=+138.810599595" Jan 21 15:49:28 crc kubenswrapper[4760]: I0121 15:49:28.154083 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:28 crc kubenswrapper[4760]: E0121 15:49:28.177866 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:28.677839534 +0000 UTC m=+139.345609112 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:28 crc kubenswrapper[4760]: I0121 15:49:28.259001 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:49:28 crc kubenswrapper[4760]: E0121 15:49:28.259741 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:28.759723218 +0000 UTC m=+139.427492786 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:28 crc kubenswrapper[4760]: I0121 15:49:28.361445 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:28 crc kubenswrapper[4760]: E0121 15:49:28.362159 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:28.862136869 +0000 UTC m=+139.529906447 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:28 crc kubenswrapper[4760]: I0121 15:49:28.382129 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cxv6j" Jan 21 15:49:28 crc kubenswrapper[4760]: I0121 15:49:28.469743 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:49:28 crc kubenswrapper[4760]: E0121 15:49:28.470180 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:28.970159986 +0000 UTC m=+139.637929564 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:28 crc kubenswrapper[4760]: I0121 15:49:28.557508 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-21 15:44:27 +0000 UTC, rotation deadline is 2026-10-11 04:14:09.016124092 +0000 UTC Jan 21 15:49:28 crc kubenswrapper[4760]: I0121 15:49:28.557970 4760 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6300h24m40.458159962s for next certificate rotation Jan 21 15:49:28 crc kubenswrapper[4760]: I0121 15:49:28.571339 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:28 crc kubenswrapper[4760]: E0121 15:49:28.571765 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:29.071743651 +0000 UTC m=+139.739513229 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:28 crc kubenswrapper[4760]: I0121 15:49:28.672550 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:49:28 crc kubenswrapper[4760]: E0121 15:49:28.672890 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:29.172872478 +0000 UTC m=+139.840642056 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:28 crc kubenswrapper[4760]: I0121 15:49:28.774521 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:28 crc kubenswrapper[4760]: E0121 15:49:28.774951 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:29.274937574 +0000 UTC m=+139.942707152 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:28 crc kubenswrapper[4760]: I0121 15:49:28.875822 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:49:28 crc kubenswrapper[4760]: E0121 15:49:28.876812 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:29.376789029 +0000 UTC m=+140.044558607 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:28 crc kubenswrapper[4760]: I0121 15:49:28.919688 4760 patch_prober.go:28] interesting pod/router-default-5444994796-gfd2m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 15:49:28 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld Jan 21 15:49:28 crc kubenswrapper[4760]: [+]process-running ok Jan 21 15:49:28 crc kubenswrapper[4760]: healthz check failed Jan 21 15:49:28 crc kubenswrapper[4760]: I0121 15:49:28.919749 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gfd2m" podUID="bed95d17-1666-4ad0-afea-faa4a683ed81" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 15:49:28 crc kubenswrapper[4760]: I0121 15:49:28.977803 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:28 crc kubenswrapper[4760]: E0121 15:49:28.978915 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-21 15:49:29.478899527 +0000 UTC m=+140.146669095 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.083399 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:49:29 crc kubenswrapper[4760]: E0121 15:49:29.084159 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:29.584144437 +0000 UTC m=+140.251914015 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.118136 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mcpg9" event={"ID":"c8bfbc82-e9e7-4b1d-93a5-a0b2528ebd2a","Type":"ContainerStarted","Data":"f802f181404307face35bb14e014757f96cac61ff1a14b625887611c1d35e9ec"}
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.119222 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mcpg9"
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.129585 4760 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-mcpg9 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body=
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.129652 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mcpg9" podUID="c8bfbc82-e9e7-4b1d-93a5-a0b2528ebd2a" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused"
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.147602 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-z4gv8" event={"ID":"816d3ef0-0471-4ee0-998b-947d78f8d3f3","Type":"ContainerStarted","Data":"52d1cc6b2014e4136a649a8f04036d6a683140a2dfd6634aa0a8653772c65bc5"}
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.147668 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-z4gv8"
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.167273 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mcpg9" podStartSLOduration=121.167241422 podStartE2EDuration="2m1.167241422s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:29.153620238 +0000 UTC m=+139.821389816" watchObservedRunningTime="2026-01-21 15:49:29.167241422 +0000 UTC m=+139.835011000"
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.193585 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j"
Jan 21 15:49:29 crc kubenswrapper[4760]: E0121 15:49:29.194141 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:29.694122317 +0000 UTC m=+140.361891895 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.208889 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-z4gv8" podStartSLOduration=121.208852648 podStartE2EDuration="2m1.208852648s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:29.195112848 +0000 UTC m=+139.862882426" watchObservedRunningTime="2026-01-21 15:49:29.208852648 +0000 UTC m=+139.876622246"
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.215299 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gnjlk" event={"ID":"ed7d129a-ef1e-4de8-a0c6-c00d9efda2d8","Type":"ContainerStarted","Data":"acc419561160969bbaa26c7e60d5ab0ca192d39e78522bcb1ad74cdc24f107ca"}
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.216933 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gnjlk"
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.220899 4760 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-gnjlk container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body=
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.220976 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gnjlk" podUID="ed7d129a-ef1e-4de8-a0c6-c00d9efda2d8" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused"
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.230433 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-xq6c8" event={"ID":"20be687d-c18c-434b-9ccf-f6d2ec79e0f3","Type":"ContainerStarted","Data":"c6eac03c2c31357f77e204565f8ebdece7df3cdf72f97c1dbcd2718492be5dca"}
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.254000 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gnjlk" podStartSLOduration=121.253970131 podStartE2EDuration="2m1.253970131s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:29.253900708 +0000 UTC m=+139.921670286" watchObservedRunningTime="2026-01-21 15:49:29.253970131 +0000 UTC m=+139.921739709"
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.279231 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-p467d" event={"ID":"802d2cd3-498b-4d87-880d-0f23a14c183f","Type":"ContainerStarted","Data":"b6c2e618e87c07e302ec2de95b630de140c3ff47320b01201407090b65527bc1"}
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.280563 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-p467d"
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.292850 4760 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-p467d container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.25:6443/healthz\": dial tcp 10.217.0.25:6443: connect: connection refused" start-of-body=
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.293394 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-p467d" podUID="802d2cd3-498b-4d87-880d-0f23a14c183f" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.25:6443/healthz\": dial tcp 10.217.0.25:6443: connect: connection refused"
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.299547 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-xq6c8" podStartSLOduration=9.299528083 podStartE2EDuration="9.299528083s" podCreationTimestamp="2026-01-21 15:49:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:29.297873453 +0000 UTC m=+139.965643031" watchObservedRunningTime="2026-01-21 15:49:29.299528083 +0000 UTC m=+139.967297661"
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.303902 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:49:29 crc kubenswrapper[4760]: E0121 15:49:29.305675 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:29.805657292 +0000 UTC m=+140.473426870 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.352745 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-p467d" podStartSLOduration=122.352709947 podStartE2EDuration="2m2.352709947s" podCreationTimestamp="2026-01-21 15:47:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:29.343819512 +0000 UTC m=+140.011589110" watchObservedRunningTime="2026-01-21 15:49:29.352709947 +0000 UTC m=+140.020479525"
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.374383 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zg9b5" event={"ID":"a6ac7afc-7d8c-44c4-90dc-3c5a9dd3439f","Type":"ContainerStarted","Data":"b4f9c0e7befee4542127ebaea8bb536114f236d98e50ca47f181f0bc4938f1a7"}
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.393769 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w6lsk" event={"ID":"a9e71cc4-57de-4f68-9b1f-ddf9ce2deace","Type":"ContainerStarted","Data":"b19b85a160050f4a35381b3df55bf10ca83a9e9946981d9cf3a7031f1ad004c1"}
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.408812 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j"
Jan 21 15:49:29 crc kubenswrapper[4760]: E0121 15:49:29.411505 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:29.911483046 +0000 UTC m=+140.579252614 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.418091 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-mt5nl" event={"ID":"a24dcb12-1228-4acf-bea2-864a7c159e6f","Type":"ContainerStarted","Data":"d05d48c2e85f535cdd9d87b330fd379ffaeb0ab7b963c572924272cc4541df70"}
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.471424 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-sztm4" event={"ID":"326950d2-00f8-43d7-9cbd-6a337226d219","Type":"ContainerStarted","Data":"b96526153aa10134f44e1b473e6ceecf88f946f9595c66a23a0dda52a5c6828a"}
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.475649 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5qwrq" event={"ID":"ea412328-1a05-4865-94c4-ab85c8694e6f","Type":"ContainerStarted","Data":"3d011a77e00776983e60c01efe2e1b47895a6d54f3165efe327d48417675dd49"}
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.477367 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5qwrq"
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.482239 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w6lsk" podStartSLOduration=121.482226561 podStartE2EDuration="2m1.482226561s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:29.443953326 +0000 UTC m=+140.111722904" watchObservedRunningTime="2026-01-21 15:49:29.482226561 +0000 UTC m=+140.149996139"
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.484485 4760 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5qwrq container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:5443/healthz\": dial tcp 10.217.0.24:5443: connect: connection refused" start-of-body=
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.484567 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5qwrq" podUID="ea412328-1a05-4865-94c4-ab85c8694e6f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.24:5443/healthz\": dial tcp 10.217.0.24:5443: connect: connection refused"
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.502946 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f2k2z" event={"ID":"e796ed4a-36e3-4630-9c37-3f5b49b6483d","Type":"ContainerStarted","Data":"bb0e287dd7373e957d9cb8ccf39ebe206aeb745064c7a3e98e634c117a445440"}
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.509960 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:49:29 crc kubenswrapper[4760]: E0121 15:49:29.510460 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:30.01042323 +0000 UTC m=+140.678192808 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.519994 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rw8t2" event={"ID":"9080e6bd-e0c8-46c4-a267-9413c3e0b162","Type":"ContainerStarted","Data":"58b478a2c76afcc795b7af149365f63854f8eea5771a08555a3c0c61e7fbae56"}
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.520184 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5qwrq" podStartSLOduration=121.520162801 podStartE2EDuration="2m1.520162801s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:29.519741293 +0000 UTC m=+140.187510871" watchObservedRunningTime="2026-01-21 15:49:29.520162801 +0000 UTC m=+140.187932379"
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.521356 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-mt5nl" podStartSLOduration=121.521350391 podStartE2EDuration="2m1.521350391s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:29.483467963 +0000 UTC m=+140.151237561" watchObservedRunningTime="2026-01-21 15:49:29.521350391 +0000 UTC m=+140.189119959"
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.557640 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f2k2z" podStartSLOduration=121.557611761 podStartE2EDuration="2m1.557611761s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:29.554961169 +0000 UTC m=+140.222730767" watchObservedRunningTime="2026-01-21 15:49:29.557611761 +0000 UTC m=+140.225381349"
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.580940 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-x7qrd" event={"ID":"a02e3350-da29-44d4-be95-ae71458cc1e2","Type":"ContainerStarted","Data":"da55f669feccb65ae762e3e712eaf0f7b598b57f740d86b842cd0d1f01c477c0"}
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.597315 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-5p8jw" event={"ID":"c197ee1e-f79d-4867-8033-ba4b934a9f86","Type":"ContainerStarted","Data":"3fc25ec904d65a5422adf816c8d05d1ab990d2140444d805b4d0d5daef4ca2cb"}
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.599178 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8rjck" event={"ID":"26dbd752-d785-488a-879b-543307d0a4cd","Type":"ContainerStarted","Data":"c008d6908fb3c42c8728f2a878ae79bd91a950c83c3c25547eb027f33fea9917"}
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.608469 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rw8t2" podStartSLOduration=121.608450936 podStartE2EDuration="2m1.608450936s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:29.607828229 +0000 UTC m=+140.275597807" watchObservedRunningTime="2026-01-21 15:49:29.608450936 +0000 UTC m=+140.276220514"
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.612132 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j"
Jan 21 15:49:29 crc kubenswrapper[4760]: E0121 15:49:29.616029 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:30.116007624 +0000 UTC m=+140.783777202 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.677659 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dm455" event={"ID":"0df700c2-3091-4770-b404-cc81bc416387","Type":"ContainerStarted","Data":"c8751e38c84f6894a7013d6ec92e7116f6c399d891ec685510ac630cc8fbcc02"}
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.688365 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-x7qrd" podStartSLOduration=121.688342766 podStartE2EDuration="2m1.688342766s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:29.648900382 +0000 UTC m=+140.316669960" watchObservedRunningTime="2026-01-21 15:49:29.688342766 +0000 UTC m=+140.356112344"
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.689752 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-5p8jw" podStartSLOduration=121.689746805 podStartE2EDuration="2m1.689746805s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:29.68748156 +0000 UTC m=+140.355251158" watchObservedRunningTime="2026-01-21 15:49:29.689746805 +0000 UTC m=+140.357516383"
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.704589 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-4x9fq" event={"ID":"3671d10c-81c6-4c7f-9117-1c237e4efe51","Type":"ContainerStarted","Data":"581ac2628d893e2ac9071467018720ae69797aaa81f706d13ab8c068a5036025"}
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.713543 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:49:29 crc kubenswrapper[4760]: E0121 15:49:29.714776 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:30.21475816 +0000 UTC m=+140.882527738 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.722453 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8rjck" podStartSLOduration=121.722426144 podStartE2EDuration="2m1.722426144s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:29.721420471 +0000 UTC m=+140.389190049" watchObservedRunningTime="2026-01-21 15:49:29.722426144 +0000 UTC m=+140.390195722"
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.751446 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-n6cjk" event={"ID":"7ae6da0d-f707-4d3e-8625-cae54fe221d0","Type":"ContainerStarted","Data":"2b94a229f4949cfca4a5147709b3025c6e8c67d1b1e52dd2cf563b8a1bb2e2e7"}
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.779695 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kvxcc" event={"ID":"5537d4e7-89b1-40bb-b87e-d0d1c59840c5","Type":"ContainerStarted","Data":"006eef7647ac91ac647e61fd1456530847a7fc7af3657bf48d48dc659de4ae27"}
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.780386 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kvxcc"
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.820362 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j"
Jan 21 15:49:29 crc kubenswrapper[4760]: E0121 15:49:29.823230 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:30.323207876 +0000 UTC m=+140.990977524 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.833551 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-4x9fq" podStartSLOduration=121.833528111 podStartE2EDuration="2m1.833528111s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:29.820523412 +0000 UTC m=+140.488293010" watchObservedRunningTime="2026-01-21 15:49:29.833528111 +0000 UTC m=+140.501297689"
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.833882 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dm455" podStartSLOduration=121.833874676 podStartE2EDuration="2m1.833874676s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:29.777372172 +0000 UTC m=+140.445141750" watchObservedRunningTime="2026-01-21 15:49:29.833874676 +0000 UTC m=+140.501644254"
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.912442 4760 patch_prober.go:28] interesting pod/router-default-5444994796-gfd2m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 15:49:29 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld
Jan 21 15:49:29 crc kubenswrapper[4760]: [+]process-running ok
Jan 21 15:49:29 crc kubenswrapper[4760]: healthz check failed
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.912493 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gfd2m" podUID="bed95d17-1666-4ad0-afea-faa4a683ed81" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 15:49:29 crc kubenswrapper[4760]: I0121 15:49:29.928928 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:49:29 crc kubenswrapper[4760]: E0121 15:49:29.929388 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:30.429366664 +0000 UTC m=+141.097136242 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:30 crc kubenswrapper[4760]: I0121 15:49:30.032198 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j"
Jan 21 15:49:30 crc kubenswrapper[4760]: E0121 15:49:30.061405 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:30.561380283 +0000 UTC m=+141.229149861 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:30 crc kubenswrapper[4760]: I0121 15:49:30.083756 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kvxcc" podStartSLOduration=122.083729866 podStartE2EDuration="2m2.083729866s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:30.082015934 +0000 UTC m=+140.749785512" watchObservedRunningTime="2026-01-21 15:49:30.083729866 +0000 UTC m=+140.751499444"
Jan 21 15:49:30 crc kubenswrapper[4760]: I0121 15:49:30.171577 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:49:30 crc kubenswrapper[4760]: E0121 15:49:30.171996 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:30.671978709 +0000 UTC m=+141.339748287 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:30 crc kubenswrapper[4760]: I0121 15:49:30.273299 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j"
Jan 21 15:49:30 crc kubenswrapper[4760]: E0121 15:49:30.273890 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:30.773862667 +0000 UTC m=+141.441632245 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:30 crc kubenswrapper[4760]: I0121 15:49:30.375392 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 15:49:30 crc kubenswrapper[4760]: E0121 15:49:30.377031 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:30.877007458 +0000 UTC m=+141.544777036 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 15:49:30 crc kubenswrapper[4760]: I0121 15:49:30.437640 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm"
Jan 21 15:49:30 crc kubenswrapper[4760]: I0121 15:49:30.478877 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j"
Jan 21 15:49:30 crc kubenswrapper[4760]: E0121 15:49:30.479298 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:30.979285263 +0000 UTC m=+141.647054841 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:30 crc kubenswrapper[4760]: I0121 15:49:30.580440 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:49:30 crc kubenswrapper[4760]: E0121 15:49:30.582448 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:31.081339558 +0000 UTC m=+141.749109136 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:30 crc kubenswrapper[4760]: I0121 15:49:30.682299 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:30 crc kubenswrapper[4760]: E0121 15:49:30.682714 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:31.182698294 +0000 UTC m=+141.850467872 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:30 crc kubenswrapper[4760]: I0121 15:49:30.783886 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:49:30 crc kubenswrapper[4760]: E0121 15:49:30.784077 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:31.28403829 +0000 UTC m=+141.951807868 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:30 crc kubenswrapper[4760]: I0121 15:49:30.784241 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:30 crc kubenswrapper[4760]: E0121 15:49:30.784687 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:31.284666696 +0000 UTC m=+141.952436264 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:30 crc kubenswrapper[4760]: I0121 15:49:30.789941 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-x7qrd" event={"ID":"a02e3350-da29-44d4-be95-ae71458cc1e2","Type":"ContainerStarted","Data":"f5a79bc3b21251beaf65405f3d7627eecef3296e2069a7a6925d45eaf046da22"} Jan 21 15:49:30 crc kubenswrapper[4760]: I0121 15:49:30.793048 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-m4t9n" event={"ID":"73565c46-9349-48bf-9145-e59424ba78f6","Type":"ContainerStarted","Data":"9ca5e20d8eba67e0c66055330a5d3c12bb4c525caa67803fb9071130b0adcd98"} Jan 21 15:49:30 crc kubenswrapper[4760]: I0121 15:49:30.795357 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zg9b5" event={"ID":"a6ac7afc-7d8c-44c4-90dc-3c5a9dd3439f","Type":"ContainerStarted","Data":"bc2085b51ad37c3a29a405b00dbf27f62aab19bd9fe51e93c67d54a3f4f9d143"} Jan 21 15:49:30 crc kubenswrapper[4760]: I0121 15:49:30.798395 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kvxcc" event={"ID":"5537d4e7-89b1-40bb-b87e-d0d1c59840c5","Type":"ContainerStarted","Data":"223136c8f973642727b780b8fea2883d4cb76aad5607ecfd8d9211b39dc6ebde"} Jan 21 15:49:30 crc kubenswrapper[4760]: I0121 15:49:30.802211 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx" 
event={"ID":"45549dc9-0155-4d34-927c-25c5fb82872b","Type":"ContainerStarted","Data":"c7661bdcc56dd6cb61ebe357a18435332855c52090e6bdbd4ff9f0f69f02982f"} Jan 21 15:49:30 crc kubenswrapper[4760]: I0121 15:49:30.805556 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-8jb7f" event={"ID":"b259e37c-8e0b-43ee-8164-320dffe1905d","Type":"ContainerStarted","Data":"39889721e735af32009804cd3a2cd0935e512aa6d4021cc3fc8acdc0246bee38"} Jan 21 15:49:30 crc kubenswrapper[4760]: I0121 15:49:30.805597 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-8jb7f" event={"ID":"b259e37c-8e0b-43ee-8164-320dffe1905d","Type":"ContainerStarted","Data":"c6135fd5687879ff1df7249574b99589d59ac3c80b3e05620488edf5a88b9955"} Jan 21 15:49:30 crc kubenswrapper[4760]: I0121 15:49:30.809282 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-n6cjk" event={"ID":"7ae6da0d-f707-4d3e-8625-cae54fe221d0","Type":"ContainerStarted","Data":"ec904739c476643719c2a1d69779840725de51f15d44866ef129ffd918de0c03"} Jan 21 15:49:30 crc kubenswrapper[4760]: I0121 15:49:30.813653 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-sztm4" event={"ID":"326950d2-00f8-43d7-9cbd-6a337226d219","Type":"ContainerStarted","Data":"a4ccd578675a80aea43f433fa6666afa301545dda8cd300f96d150b1e9eafa1f"} Jan 21 15:49:30 crc kubenswrapper[4760]: I0121 15:49:30.824651 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-gnjlk" Jan 21 15:49:30 crc kubenswrapper[4760]: I0121 15:49:30.829584 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zg9b5" podStartSLOduration=122.82955507 podStartE2EDuration="2m2.82955507s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:30.825076391 +0000 UTC m=+141.492845969" watchObservedRunningTime="2026-01-21 15:49:30.82955507 +0000 UTC m=+141.497324648" Jan 21 15:49:30 crc kubenswrapper[4760]: I0121 15:49:30.857047 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mcpg9" Jan 21 15:49:30 crc kubenswrapper[4760]: I0121 15:49:30.864372 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-n6cjk" podStartSLOduration=122.864349708 podStartE2EDuration="2m2.864349708s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:30.861449095 +0000 UTC m=+141.529218683" watchObservedRunningTime="2026-01-21 15:49:30.864349708 +0000 UTC m=+141.532119276" Jan 21 15:49:30 crc kubenswrapper[4760]: I0121 15:49:30.884880 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-sztm4" podStartSLOduration=10.884857333 podStartE2EDuration="10.884857333s" podCreationTimestamp="2026-01-21 15:49:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:30.882730453 +0000 UTC m=+141.550500021" watchObservedRunningTime="2026-01-21 15:49:30.884857333 +0000 UTC m=+141.552626911" Jan 21 15:49:30 crc kubenswrapper[4760]: I0121 15:49:30.885962 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:49:30 crc kubenswrapper[4760]: E0121 15:49:30.886195 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:31.386160758 +0000 UTC m=+142.053930336 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:30 crc kubenswrapper[4760]: I0121 15:49:30.886380 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:30 crc kubenswrapper[4760]: E0121 15:49:30.890731 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:31.39070865 +0000 UTC m=+142.058478218 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:30 crc kubenswrapper[4760]: I0121 15:49:30.917616 4760 patch_prober.go:28] interesting pod/router-default-5444994796-gfd2m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 15:49:30 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld Jan 21 15:49:30 crc kubenswrapper[4760]: [+]process-running ok Jan 21 15:49:30 crc kubenswrapper[4760]: healthz check failed Jan 21 15:49:30 crc kubenswrapper[4760]: I0121 15:49:30.917696 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gfd2m" podUID="bed95d17-1666-4ad0-afea-faa4a683ed81" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 15:49:30 crc kubenswrapper[4760]: I0121 15:49:30.924682 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx" podStartSLOduration=122.924651932 podStartE2EDuration="2m2.924651932s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:30.918672949 +0000 UTC m=+141.586442547" watchObservedRunningTime="2026-01-21 15:49:30.924651932 +0000 UTC m=+141.592421510" Jan 21 15:49:30 crc kubenswrapper[4760]: I0121 15:49:30.955718 4760 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-dns-operator/dns-operator-744455d44c-8jb7f" podStartSLOduration=122.955693901 podStartE2EDuration="2m2.955693901s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:30.955470872 +0000 UTC m=+141.623240450" watchObservedRunningTime="2026-01-21 15:49:30.955693901 +0000 UTC m=+141.623463469" Jan 21 15:49:30 crc kubenswrapper[4760]: I0121 15:49:30.988473 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:49:30 crc kubenswrapper[4760]: E0121 15:49:30.988880 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:31.48886199 +0000 UTC m=+142.156631568 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.091032 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:31 crc kubenswrapper[4760]: E0121 15:49:31.092102 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:31.592075385 +0000 UTC m=+142.259845133 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.191693 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:49:31 crc kubenswrapper[4760]: E0121 15:49:31.192155 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:31.692138156 +0000 UTC m=+142.359907724 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.292985 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:31 crc kubenswrapper[4760]: E0121 15:49:31.293358 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:31.793344126 +0000 UTC m=+142.461113704 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.396141 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:49:31 crc kubenswrapper[4760]: E0121 15:49:31.396601 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:31.896578361 +0000 UTC m=+142.564347939 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.465390 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.466379 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.475296 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.477222 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-p467d" Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.499063 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:31 crc kubenswrapper[4760]: E0121 15:49:31.499614 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-21 15:49:31.999595087 +0000 UTC m=+142.667364665 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.600788 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:49:31 crc kubenswrapper[4760]: E0121 15:49:31.601091 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:32.101068457 +0000 UTC m=+142.768838035 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.601810 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:31 crc kubenswrapper[4760]: E0121 15:49:31.603171 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 15:49:32.103159326 +0000 UTC m=+142.770928904 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-l6q9j" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.640935 4760 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.703014 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:49:31 crc kubenswrapper[4760]: E0121 15:49:31.703447 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 15:49:32.203428346 +0000 UTC m=+142.871197924 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.714052 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bcf5p"] Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.715252 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bcf5p" Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.718316 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.734255 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bcf5p"] Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.769726 4760 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-21T15:49:31.64096498Z","Handler":null,"Name":""} Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.772756 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5qwrq" Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.776441 4760 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 
15:49:31.776483 4760 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.805676 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddcb6012-213a-4989-8cb3-60fc763a8255-catalog-content\") pod \"community-operators-bcf5p\" (UID: \"ddcb6012-213a-4989-8cb3-60fc763a8255\") " pod="openshift-marketplace/community-operators-bcf5p" Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.805750 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ff6j9\" (UniqueName: \"kubernetes.io/projected/ddcb6012-213a-4989-8cb3-60fc763a8255-kube-api-access-ff6j9\") pod \"community-operators-bcf5p\" (UID: \"ddcb6012-213a-4989-8cb3-60fc763a8255\") " pod="openshift-marketplace/community-operators-bcf5p" Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.806222 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddcb6012-213a-4989-8cb3-60fc763a8255-utilities\") pod \"community-operators-bcf5p\" (UID: \"ddcb6012-213a-4989-8cb3-60fc763a8255\") " pod="openshift-marketplace/community-operators-bcf5p" Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.806304 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.817861 4760 csi_attacher.go:380] kubernetes.io/csi: 
attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.817904 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.869791 4760 generic.go:334] "Generic (PLEG): container finished" podID="a24dcb12-1228-4acf-bea2-864a7c159e6f" containerID="d05d48c2e85f535cdd9d87b330fd379ffaeb0ab7b963c572924272cc4541df70" exitCode=0 Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.869963 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-mt5nl" event={"ID":"a24dcb12-1228-4acf-bea2-864a7c159e6f","Type":"ContainerDied","Data":"d05d48c2e85f535cdd9d87b330fd379ffaeb0ab7b963c572924272cc4541df70"} Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.902823 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-m4t9n" event={"ID":"73565c46-9349-48bf-9145-e59424ba78f6","Type":"ContainerStarted","Data":"7ad533ffee23bf4c96e8ec8495f434e503910f12c84fcebd862c2efa79c68d5b"} Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.902888 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-p6nql"] Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.904288 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-m4t9n" 
event={"ID":"73565c46-9349-48bf-9145-e59424ba78f6","Type":"ContainerStarted","Data":"a98e86f6c37eb47064b95dc41766a077ac18301a350ce864f20ccd94994b7565"} Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.909541 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p6nql" Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.912001 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddcb6012-213a-4989-8cb3-60fc763a8255-utilities\") pod \"community-operators-bcf5p\" (UID: \"ddcb6012-213a-4989-8cb3-60fc763a8255\") " pod="openshift-marketplace/community-operators-bcf5p" Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.912044 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddcb6012-213a-4989-8cb3-60fc763a8255-catalog-content\") pod \"community-operators-bcf5p\" (UID: \"ddcb6012-213a-4989-8cb3-60fc763a8255\") " pod="openshift-marketplace/community-operators-bcf5p" Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.912084 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ff6j9\" (UniqueName: \"kubernetes.io/projected/ddcb6012-213a-4989-8cb3-60fc763a8255-kube-api-access-ff6j9\") pod \"community-operators-bcf5p\" (UID: \"ddcb6012-213a-4989-8cb3-60fc763a8255\") " pod="openshift-marketplace/community-operators-bcf5p" Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.913383 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddcb6012-213a-4989-8cb3-60fc763a8255-utilities\") pod \"community-operators-bcf5p\" (UID: \"ddcb6012-213a-4989-8cb3-60fc763a8255\") " pod="openshift-marketplace/community-operators-bcf5p" Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.913628 4760 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddcb6012-213a-4989-8cb3-60fc763a8255-catalog-content\") pod \"community-operators-bcf5p\" (UID: \"ddcb6012-213a-4989-8cb3-60fc763a8255\") " pod="openshift-marketplace/community-operators-bcf5p" Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.920835 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.921775 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-sztm4" Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.922355 4760 patch_prober.go:28] interesting pod/router-default-5444994796-gfd2m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 15:49:31 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld Jan 21 15:49:31 crc kubenswrapper[4760]: [+]process-running ok Jan 21 15:49:31 crc kubenswrapper[4760]: healthz check failed Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.922398 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gfd2m" podUID="bed95d17-1666-4ad0-afea-faa4a683ed81" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.947698 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-jx6dn" Jan 21 15:49:31 crc kubenswrapper[4760]: I0121 15:49:31.973258 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p6nql"] Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.014378 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbvgr\" (UniqueName: \"kubernetes.io/projected/ba544d41-3795-476a-ba4e-b9f4dcf8bb5f-kube-api-access-hbvgr\") pod \"certified-operators-p6nql\" (UID: \"ba544d41-3795-476a-ba4e-b9f4dcf8bb5f\") " pod="openshift-marketplace/certified-operators-p6nql" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.014787 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba544d41-3795-476a-ba4e-b9f4dcf8bb5f-utilities\") pod \"certified-operators-p6nql\" (UID: \"ba544d41-3795-476a-ba4e-b9f4dcf8bb5f\") " pod="openshift-marketplace/certified-operators-p6nql" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.014817 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba544d41-3795-476a-ba4e-b9f4dcf8bb5f-catalog-content\") pod \"certified-operators-p6nql\" (UID: \"ba544d41-3795-476a-ba4e-b9f4dcf8bb5f\") " pod="openshift-marketplace/certified-operators-p6nql" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.040213 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-l6q9j\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") " pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.090883 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ff6j9\" (UniqueName: \"kubernetes.io/projected/ddcb6012-213a-4989-8cb3-60fc763a8255-kube-api-access-ff6j9\") pod \"community-operators-bcf5p\" (UID: \"ddcb6012-213a-4989-8cb3-60fc763a8255\") " pod="openshift-marketplace/community-operators-bcf5p" 
Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.121892 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.122091 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbvgr\" (UniqueName: \"kubernetes.io/projected/ba544d41-3795-476a-ba4e-b9f4dcf8bb5f-kube-api-access-hbvgr\") pod \"certified-operators-p6nql\" (UID: \"ba544d41-3795-476a-ba4e-b9f4dcf8bb5f\") " pod="openshift-marketplace/certified-operators-p6nql" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.122164 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba544d41-3795-476a-ba4e-b9f4dcf8bb5f-utilities\") pod \"certified-operators-p6nql\" (UID: \"ba544d41-3795-476a-ba4e-b9f4dcf8bb5f\") " pod="openshift-marketplace/certified-operators-p6nql" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.122181 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba544d41-3795-476a-ba4e-b9f4dcf8bb5f-catalog-content\") pod \"certified-operators-p6nql\" (UID: \"ba544d41-3795-476a-ba4e-b9f4dcf8bb5f\") " pod="openshift-marketplace/certified-operators-p6nql" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.122602 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba544d41-3795-476a-ba4e-b9f4dcf8bb5f-catalog-content\") pod \"certified-operators-p6nql\" (UID: \"ba544d41-3795-476a-ba4e-b9f4dcf8bb5f\") " pod="openshift-marketplace/certified-operators-p6nql" Jan 21 15:49:32 crc 
kubenswrapper[4760]: I0121 15:49:32.123177 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba544d41-3795-476a-ba4e-b9f4dcf8bb5f-utilities\") pod \"certified-operators-p6nql\" (UID: \"ba544d41-3795-476a-ba4e-b9f4dcf8bb5f\") " pod="openshift-marketplace/certified-operators-p6nql" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.137887 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5nbxl"] Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.138875 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5nbxl" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.194578 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5nbxl"] Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.195874 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbvgr\" (UniqueName: \"kubernetes.io/projected/ba544d41-3795-476a-ba4e-b9f4dcf8bb5f-kube-api-access-hbvgr\") pod \"certified-operators-p6nql\" (UID: \"ba544d41-3795-476a-ba4e-b9f4dcf8bb5f\") " pod="openshift-marketplace/certified-operators-p6nql" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.223799 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4c156f4-f6be-46db-a27b-59da59600e26-utilities\") pod \"community-operators-5nbxl\" (UID: \"b4c156f4-f6be-46db-a27b-59da59600e26\") " pod="openshift-marketplace/community-operators-5nbxl" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.223847 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbf9j\" (UniqueName: \"kubernetes.io/projected/b4c156f4-f6be-46db-a27b-59da59600e26-kube-api-access-zbf9j\") pod 
\"community-operators-5nbxl\" (UID: \"b4c156f4-f6be-46db-a27b-59da59600e26\") " pod="openshift-marketplace/community-operators-5nbxl" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.223885 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4c156f4-f6be-46db-a27b-59da59600e26-catalog-content\") pod \"community-operators-5nbxl\" (UID: \"b4c156f4-f6be-46db-a27b-59da59600e26\") " pod="openshift-marketplace/community-operators-5nbxl" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.266654 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p6nql" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.306002 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-x5x5q"] Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.307009 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x5x5q" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.324757 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.325728 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 21 15:49:32 crc kubenswrapper[4760]: E0121 15:49:32.325850 4760 reconciler_common.go:156] "operationExecutor.UnmountVolume failed (controllerAttachDetachEnabled true) for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") : UnmountVolume.NewUnmounter failed for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") : kubernetes.io/csi: unmounter failed to load volume data file [/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~csi/pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8/mount]: kubernetes.io/csi: failed to open volume data file [/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~csi/pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8/vol_data.json]: open /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~csi/pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8/vol_data.json: no such file or directory" err="UnmountVolume.NewUnmounter failed for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") : kubernetes.io/csi: unmounter failed to load volume data file [/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~csi/pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8/mount]: kubernetes.io/csi: failed to open volume data file 
[/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~csi/pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8/vol_data.json]: open /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~csi/pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8/vol_data.json: no such file or directory" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.326102 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4c156f4-f6be-46db-a27b-59da59600e26-utilities\") pod \"community-operators-5nbxl\" (UID: \"b4c156f4-f6be-46db-a27b-59da59600e26\") " pod="openshift-marketplace/community-operators-5nbxl" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.326139 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbf9j\" (UniqueName: \"kubernetes.io/projected/b4c156f4-f6be-46db-a27b-59da59600e26-kube-api-access-zbf9j\") pod \"community-operators-5nbxl\" (UID: \"b4c156f4-f6be-46db-a27b-59da59600e26\") " pod="openshift-marketplace/community-operators-5nbxl" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.326174 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4c156f4-f6be-46db-a27b-59da59600e26-catalog-content\") pod \"community-operators-5nbxl\" (UID: \"b4c156f4-f6be-46db-a27b-59da59600e26\") " pod="openshift-marketplace/community-operators-5nbxl" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.326587 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4c156f4-f6be-46db-a27b-59da59600e26-catalog-content\") pod \"community-operators-5nbxl\" (UID: \"b4c156f4-f6be-46db-a27b-59da59600e26\") " pod="openshift-marketplace/community-operators-5nbxl" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.326800 4760 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4c156f4-f6be-46db-a27b-59da59600e26-utilities\") pod \"community-operators-5nbxl\" (UID: \"b4c156f4-f6be-46db-a27b-59da59600e26\") " pod="openshift-marketplace/community-operators-5nbxl" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.350635 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bcf5p" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.358848 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x5x5q"] Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.428404 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4t8k\" (UniqueName: \"kubernetes.io/projected/4d5712fb-d149-4923-bd66-7ec385c7508d-kube-api-access-z4t8k\") pod \"certified-operators-x5x5q\" (UID: \"4d5712fb-d149-4923-bd66-7ec385c7508d\") " pod="openshift-marketplace/certified-operators-x5x5q" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.428804 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d5712fb-d149-4923-bd66-7ec385c7508d-utilities\") pod \"certified-operators-x5x5q\" (UID: \"4d5712fb-d149-4923-bd66-7ec385c7508d\") " pod="openshift-marketplace/certified-operators-x5x5q" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.428898 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d5712fb-d149-4923-bd66-7ec385c7508d-catalog-content\") pod \"certified-operators-x5x5q\" (UID: \"4d5712fb-d149-4923-bd66-7ec385c7508d\") " pod="openshift-marketplace/certified-operators-x5x5q" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.431052 4760 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zbf9j\" (UniqueName: \"kubernetes.io/projected/b4c156f4-f6be-46db-a27b-59da59600e26-kube-api-access-zbf9j\") pod \"community-operators-5nbxl\" (UID: \"b4c156f4-f6be-46db-a27b-59da59600e26\") " pod="openshift-marketplace/community-operators-5nbxl" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.508963 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-z4gv8" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.514739 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5nbxl" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.530602 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d5712fb-d149-4923-bd66-7ec385c7508d-catalog-content\") pod \"certified-operators-x5x5q\" (UID: \"4d5712fb-d149-4923-bd66-7ec385c7508d\") " pod="openshift-marketplace/certified-operators-x5x5q" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.530677 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4t8k\" (UniqueName: \"kubernetes.io/projected/4d5712fb-d149-4923-bd66-7ec385c7508d-kube-api-access-z4t8k\") pod \"certified-operators-x5x5q\" (UID: \"4d5712fb-d149-4923-bd66-7ec385c7508d\") " pod="openshift-marketplace/certified-operators-x5x5q" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.530707 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d5712fb-d149-4923-bd66-7ec385c7508d-utilities\") pod \"certified-operators-x5x5q\" (UID: \"4d5712fb-d149-4923-bd66-7ec385c7508d\") " pod="openshift-marketplace/certified-operators-x5x5q" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.531238 4760 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d5712fb-d149-4923-bd66-7ec385c7508d-utilities\") pod \"certified-operators-x5x5q\" (UID: \"4d5712fb-d149-4923-bd66-7ec385c7508d\") " pod="openshift-marketplace/certified-operators-x5x5q" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.531475 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d5712fb-d149-4923-bd66-7ec385c7508d-catalog-content\") pod \"certified-operators-x5x5q\" (UID: \"4d5712fb-d149-4923-bd66-7ec385c7508d\") " pod="openshift-marketplace/certified-operators-x5x5q" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.590358 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4t8k\" (UniqueName: \"kubernetes.io/projected/4d5712fb-d149-4923-bd66-7ec385c7508d-kube-api-access-z4t8k\") pod \"certified-operators-x5x5q\" (UID: \"4d5712fb-d149-4923-bd66-7ec385c7508d\") " pod="openshift-marketplace/certified-operators-x5x5q" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.636437 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x5x5q" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.926121 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-m4t9n" event={"ID":"73565c46-9349-48bf-9145-e59424ba78f6","Type":"ContainerStarted","Data":"2ce397f95d29ea7f6cd5f014664cf2cb4716f37a54780bd0e3e2425ebe774885"} Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.928583 4760 patch_prober.go:28] interesting pod/router-default-5444994796-gfd2m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 15:49:32 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld Jan 21 15:49:32 crc kubenswrapper[4760]: [+]process-running ok Jan 21 15:49:32 crc kubenswrapper[4760]: healthz check failed Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.928632 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gfd2m" podUID="bed95d17-1666-4ad0-afea-faa4a683ed81" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 15:49:32 crc kubenswrapper[4760]: I0121 15:49:32.964975 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-m4t9n" podStartSLOduration=12.964955944 podStartE2EDuration="12.964955944s" podCreationTimestamp="2026-01-21 15:49:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:32.96249144 +0000 UTC m=+143.630261018" watchObservedRunningTime="2026-01-21 15:49:32.964955944 +0000 UTC m=+143.632725522" Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.100629 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-l6q9j"] Jan 21 
15:49:33 crc kubenswrapper[4760]: W0121 15:49:33.112772 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d974904_dd7e_42df_8d49_3c5633b30767.slice/crio-5ac024438f82f52729980b350537456191be564162967e4cf350c96d6db2cc6d WatchSource:0}: Error finding container 5ac024438f82f52729980b350537456191be564162967e4cf350c96d6db2cc6d: Status 404 returned error can't find the container with id 5ac024438f82f52729980b350537456191be564162967e4cf350c96d6db2cc6d Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.189889 4760 patch_prober.go:28] interesting pod/downloads-7954f5f757-gqcsb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.189953 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gqcsb" podUID="5a1a3d13-a380-46b2-afd6-c8f5dc864f39" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.190492 4760 patch_prober.go:28] interesting pod/downloads-7954f5f757-gqcsb container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.190510 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-gqcsb" podUID="5a1a3d13-a380-46b2-afd6-c8f5dc864f39" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.195179 4760 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bcf5p"] Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.313968 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p6nql"] Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.317248 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx" Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.318307 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx" Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.334273 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx" Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.379339 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-clnlg" Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.382167 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-clnlg" Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.388074 4760 patch_prober.go:28] interesting pod/console-f9d7485db-clnlg container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.7:8443/health\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.388150 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-clnlg" podUID="dca5ed86-6716-40a8-a0d9-b403b3d3edd2" containerName="console" probeResult="failure" output="Get \"https://10.217.0.7:8443/health\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.407898 4760 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x5x5q"]
Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.546006 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5nbxl"]
Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.635687 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.863540 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-mt5nl"
Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.878703 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6xd6s"]
Jan 21 15:49:33 crc kubenswrapper[4760]: E0121 15:49:33.878954 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a24dcb12-1228-4acf-bea2-864a7c159e6f" containerName="collect-profiles"
Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.878972 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="a24dcb12-1228-4acf-bea2-864a7c159e6f" containerName="collect-profiles"
Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.879103 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="a24dcb12-1228-4acf-bea2-864a7c159e6f" containerName="collect-profiles"
Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.880054 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6xd6s"
Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.884872 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.896954 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6xd6s"]
Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.909866 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-gfd2m"
Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.914710 4760 patch_prober.go:28] interesting pod/router-default-5444994796-gfd2m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 15:49:33 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld
Jan 21 15:49:33 crc kubenswrapper[4760]: [+]process-running ok
Jan 21 15:49:33 crc kubenswrapper[4760]: healthz check failed
Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.914749 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gfd2m" podUID="bed95d17-1666-4ad0-afea-faa4a683ed81" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.937495 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-mt5nl" event={"ID":"a24dcb12-1228-4acf-bea2-864a7c159e6f","Type":"ContainerDied","Data":"6951e69f04a2a5c86ea7d6c3049c25b0a87bd8c7e48d7d29e32024371bddd38f"}
Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.937849 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6951e69f04a2a5c86ea7d6c3049c25b0a87bd8c7e48d7d29e32024371bddd38f"
Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.938023 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483505-mt5nl"
Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.942762 4760 generic.go:334] "Generic (PLEG): container finished" podID="ba544d41-3795-476a-ba4e-b9f4dcf8bb5f" containerID="afb748d1e3303e9c354838b8efea3a1db5673f0417c1ed47429a02ba7c78d173" exitCode=0
Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.942949 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p6nql" event={"ID":"ba544d41-3795-476a-ba4e-b9f4dcf8bb5f","Type":"ContainerDied","Data":"afb748d1e3303e9c354838b8efea3a1db5673f0417c1ed47429a02ba7c78d173"}
Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.943024 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p6nql" event={"ID":"ba544d41-3795-476a-ba4e-b9f4dcf8bb5f","Type":"ContainerStarted","Data":"04845806ce311f8c329c8bcbddee515e27f30b40e982c655baf6a2792e30a7a8"}
Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.944624 4760 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.949846 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" event={"ID":"4d974904-dd7e-42df-8d49-3c5633b30767","Type":"ContainerStarted","Data":"8f48d53014c54a6b5dbadff00a82064b7521b5a86e24ffcac9c366f3ec028859"}
Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.950060 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" event={"ID":"4d974904-dd7e-42df-8d49-3c5633b30767","Type":"ContainerStarted","Data":"5ac024438f82f52729980b350537456191be564162967e4cf350c96d6db2cc6d"}
Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.950289 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j"
Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.965063 4760 generic.go:334] "Generic (PLEG): container finished" podID="4d5712fb-d149-4923-bd66-7ec385c7508d" containerID="298f3443bb35cede4296fc84eb0c6530e2963c3e90fd74ab5b5a237306e9d7f0" exitCode=0
Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.965209 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x5x5q" event={"ID":"4d5712fb-d149-4923-bd66-7ec385c7508d","Type":"ContainerDied","Data":"298f3443bb35cede4296fc84eb0c6530e2963c3e90fd74ab5b5a237306e9d7f0"}
Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.965254 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x5x5q" event={"ID":"4d5712fb-d149-4923-bd66-7ec385c7508d","Type":"ContainerStarted","Data":"b49eabd9c40dfcff1229e3fce7a175dcd666f9a87becb64c24e0cea1a2f942b3"}
Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.970270 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5nbxl" event={"ID":"b4c156f4-f6be-46db-a27b-59da59600e26","Type":"ContainerStarted","Data":"44d5eacb3cae9354e841247c1b95494990dd89e328c447caec11d523a325a699"}
Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.970487 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5nbxl" event={"ID":"b4c156f4-f6be-46db-a27b-59da59600e26","Type":"ContainerStarted","Data":"0b0a7331696e324346519fa26d2e7eaf67a45bf400da7b1769b77d9a40dd4ed9"}
Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.974088 4760 generic.go:334] "Generic (PLEG): container finished" podID="ddcb6012-213a-4989-8cb3-60fc763a8255" containerID="77cf9d1328c6c7c43e38bfbe89cf385c04c30e3e3af785877e99bb17caac4c54" exitCode=0
Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.974609 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bcf5p" event={"ID":"ddcb6012-213a-4989-8cb3-60fc763a8255","Type":"ContainerDied","Data":"77cf9d1328c6c7c43e38bfbe89cf385c04c30e3e3af785877e99bb17caac4c54"}
Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.974721 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bcf5p" event={"ID":"ddcb6012-213a-4989-8cb3-60fc763a8255","Type":"ContainerStarted","Data":"a7babcd6222774dab124948469e3fbae711626933b44ca524c6ab5d5470092df"}
Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.984858 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9stx"
Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.985608 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a24dcb12-1228-4acf-bea2-864a7c159e6f-secret-volume\") pod \"a24dcb12-1228-4acf-bea2-864a7c159e6f\" (UID: \"a24dcb12-1228-4acf-bea2-864a7c159e6f\") "
Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.985726 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a24dcb12-1228-4acf-bea2-864a7c159e6f-config-volume\") pod \"a24dcb12-1228-4acf-bea2-864a7c159e6f\" (UID: \"a24dcb12-1228-4acf-bea2-864a7c159e6f\") "
Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.985773 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kt5q4\" (UniqueName: \"kubernetes.io/projected/a24dcb12-1228-4acf-bea2-864a7c159e6f-kube-api-access-kt5q4\") pod \"a24dcb12-1228-4acf-bea2-864a7c159e6f\" (UID: \"a24dcb12-1228-4acf-bea2-864a7c159e6f\") "
Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.986179 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0416bf01-ef39-4a1b-b8ca-8e02ea2882ac-utilities\") pod \"redhat-marketplace-6xd6s\" (UID: \"0416bf01-ef39-4a1b-b8ca-8e02ea2882ac\") " pod="openshift-marketplace/redhat-marketplace-6xd6s"
Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.986237 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s5tz\" (UniqueName: \"kubernetes.io/projected/0416bf01-ef39-4a1b-b8ca-8e02ea2882ac-kube-api-access-5s5tz\") pod \"redhat-marketplace-6xd6s\" (UID: \"0416bf01-ef39-4a1b-b8ca-8e02ea2882ac\") " pod="openshift-marketplace/redhat-marketplace-6xd6s"
Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.986280 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0416bf01-ef39-4a1b-b8ca-8e02ea2882ac-catalog-content\") pod \"redhat-marketplace-6xd6s\" (UID: \"0416bf01-ef39-4a1b-b8ca-8e02ea2882ac\") " pod="openshift-marketplace/redhat-marketplace-6xd6s"
Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.989864 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a24dcb12-1228-4acf-bea2-864a7c159e6f-config-volume" (OuterVolumeSpecName: "config-volume") pod "a24dcb12-1228-4acf-bea2-864a7c159e6f" (UID: "a24dcb12-1228-4acf-bea2-864a7c159e6f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:49:33 crc kubenswrapper[4760]: I0121 15:49:33.995722 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" podStartSLOduration=125.995697878 podStartE2EDuration="2m5.995697878s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:33.984932523 +0000 UTC m=+144.652702101" watchObservedRunningTime="2026-01-21 15:49:33.995697878 +0000 UTC m=+144.663467456"
Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.008738 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a24dcb12-1228-4acf-bea2-864a7c159e6f-kube-api-access-kt5q4" (OuterVolumeSpecName: "kube-api-access-kt5q4") pod "a24dcb12-1228-4acf-bea2-864a7c159e6f" (UID: "a24dcb12-1228-4acf-bea2-864a7c159e6f"). InnerVolumeSpecName "kube-api-access-kt5q4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.009684 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a24dcb12-1228-4acf-bea2-864a7c159e6f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a24dcb12-1228-4acf-bea2-864a7c159e6f" (UID: "a24dcb12-1228-4acf-bea2-864a7c159e6f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.088349 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0416bf01-ef39-4a1b-b8ca-8e02ea2882ac-utilities\") pod \"redhat-marketplace-6xd6s\" (UID: \"0416bf01-ef39-4a1b-b8ca-8e02ea2882ac\") " pod="openshift-marketplace/redhat-marketplace-6xd6s"
Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.088412 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5s5tz\" (UniqueName: \"kubernetes.io/projected/0416bf01-ef39-4a1b-b8ca-8e02ea2882ac-kube-api-access-5s5tz\") pod \"redhat-marketplace-6xd6s\" (UID: \"0416bf01-ef39-4a1b-b8ca-8e02ea2882ac\") " pod="openshift-marketplace/redhat-marketplace-6xd6s"
Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.088474 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0416bf01-ef39-4a1b-b8ca-8e02ea2882ac-catalog-content\") pod \"redhat-marketplace-6xd6s\" (UID: \"0416bf01-ef39-4a1b-b8ca-8e02ea2882ac\") " pod="openshift-marketplace/redhat-marketplace-6xd6s"
Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.088508 4760 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a24dcb12-1228-4acf-bea2-864a7c159e6f-config-volume\") on node \"crc\" DevicePath \"\""
Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.088519 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kt5q4\" (UniqueName: \"kubernetes.io/projected/a24dcb12-1228-4acf-bea2-864a7c159e6f-kube-api-access-kt5q4\") on node \"crc\" DevicePath \"\""
Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.088530 4760 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a24dcb12-1228-4acf-bea2-864a7c159e6f-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.088942 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0416bf01-ef39-4a1b-b8ca-8e02ea2882ac-catalog-content\") pod \"redhat-marketplace-6xd6s\" (UID: \"0416bf01-ef39-4a1b-b8ca-8e02ea2882ac\") " pod="openshift-marketplace/redhat-marketplace-6xd6s"
Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.090344 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0416bf01-ef39-4a1b-b8ca-8e02ea2882ac-utilities\") pod \"redhat-marketplace-6xd6s\" (UID: \"0416bf01-ef39-4a1b-b8ca-8e02ea2882ac\") " pod="openshift-marketplace/redhat-marketplace-6xd6s"
Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.111460 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5s5tz\" (UniqueName: \"kubernetes.io/projected/0416bf01-ef39-4a1b-b8ca-8e02ea2882ac-kube-api-access-5s5tz\") pod \"redhat-marketplace-6xd6s\" (UID: \"0416bf01-ef39-4a1b-b8ca-8e02ea2882ac\") " pod="openshift-marketplace/redhat-marketplace-6xd6s"
Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.199976 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6xd6s"
Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.283843 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6ndfg"]
Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.314216 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6ndfg"]
Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.314755 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6ndfg"
Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.356295 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.358224 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.364088 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.365635 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.369383 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.392769 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zrnd\" (UniqueName: \"kubernetes.io/projected/eae3b2cf-b59a-4ff2-801e-e6a6be3692dc-kube-api-access-7zrnd\") pod \"redhat-marketplace-6ndfg\" (UID: \"eae3b2cf-b59a-4ff2-801e-e6a6be3692dc\") " pod="openshift-marketplace/redhat-marketplace-6ndfg"
Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.392914 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eae3b2cf-b59a-4ff2-801e-e6a6be3692dc-catalog-content\") pod \"redhat-marketplace-6ndfg\" (UID: \"eae3b2cf-b59a-4ff2-801e-e6a6be3692dc\") " pod="openshift-marketplace/redhat-marketplace-6ndfg"
Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.392961 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eae3b2cf-b59a-4ff2-801e-e6a6be3692dc-utilities\") pod \"redhat-marketplace-6ndfg\" (UID: \"eae3b2cf-b59a-4ff2-801e-e6a6be3692dc\") " pod="openshift-marketplace/redhat-marketplace-6ndfg"
Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.493911 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b772443e-4487-4b78-8dce-f66b7bd1e6fc-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b772443e-4487-4b78-8dce-f66b7bd1e6fc\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.494093 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eae3b2cf-b59a-4ff2-801e-e6a6be3692dc-catalog-content\") pod \"redhat-marketplace-6ndfg\" (UID: \"eae3b2cf-b59a-4ff2-801e-e6a6be3692dc\") " pod="openshift-marketplace/redhat-marketplace-6ndfg"
Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.494233 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eae3b2cf-b59a-4ff2-801e-e6a6be3692dc-utilities\") pod \"redhat-marketplace-6ndfg\" (UID: \"eae3b2cf-b59a-4ff2-801e-e6a6be3692dc\") " pod="openshift-marketplace/redhat-marketplace-6ndfg"
Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.494549 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zrnd\" (UniqueName: \"kubernetes.io/projected/eae3b2cf-b59a-4ff2-801e-e6a6be3692dc-kube-api-access-7zrnd\") pod \"redhat-marketplace-6ndfg\" (UID: \"eae3b2cf-b59a-4ff2-801e-e6a6be3692dc\") " pod="openshift-marketplace/redhat-marketplace-6ndfg"
Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.494702 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b772443e-4487-4b78-8dce-f66b7bd1e6fc-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b772443e-4487-4b78-8dce-f66b7bd1e6fc\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.495632 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eae3b2cf-b59a-4ff2-801e-e6a6be3692dc-utilities\") pod \"redhat-marketplace-6ndfg\" (UID: \"eae3b2cf-b59a-4ff2-801e-e6a6be3692dc\") " pod="openshift-marketplace/redhat-marketplace-6ndfg"
Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.495754 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eae3b2cf-b59a-4ff2-801e-e6a6be3692dc-catalog-content\") pod \"redhat-marketplace-6ndfg\" (UID: \"eae3b2cf-b59a-4ff2-801e-e6a6be3692dc\") " pod="openshift-marketplace/redhat-marketplace-6ndfg"
Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.518623 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zrnd\" (UniqueName: \"kubernetes.io/projected/eae3b2cf-b59a-4ff2-801e-e6a6be3692dc-kube-api-access-7zrnd\") pod \"redhat-marketplace-6ndfg\" (UID: \"eae3b2cf-b59a-4ff2-801e-e6a6be3692dc\") " pod="openshift-marketplace/redhat-marketplace-6ndfg"
Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.569891 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6xd6s"]
Jan 21 15:49:34 crc kubenswrapper[4760]: W0121 15:49:34.582262 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0416bf01_ef39_4a1b_b8ca_8e02ea2882ac.slice/crio-ede1a4e7d84b3a77b2d09727692f417c217a123ba4dc60426746e1a06e51fa3b WatchSource:0}: Error finding container ede1a4e7d84b3a77b2d09727692f417c217a123ba4dc60426746e1a06e51fa3b: Status 404 returned error can't find the container with id ede1a4e7d84b3a77b2d09727692f417c217a123ba4dc60426746e1a06e51fa3b
Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.596635 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b772443e-4487-4b78-8dce-f66b7bd1e6fc-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b772443e-4487-4b78-8dce-f66b7bd1e6fc\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.596741 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b772443e-4487-4b78-8dce-f66b7bd1e6fc-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b772443e-4487-4b78-8dce-f66b7bd1e6fc\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.596904 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b772443e-4487-4b78-8dce-f66b7bd1e6fc-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b772443e-4487-4b78-8dce-f66b7bd1e6fc\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.628247 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b772443e-4487-4b78-8dce-f66b7bd1e6fc-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b772443e-4487-4b78-8dce-f66b7bd1e6fc\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.645900 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6ndfg"
Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.697916 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.901643 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-26f8s"]
Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.904050 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-26f8s"
Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.907226 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-26f8s"]
Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.909446 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.921135 4760 patch_prober.go:28] interesting pod/router-default-5444994796-gfd2m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 15:49:34 crc kubenswrapper[4760]: [-]has-synced failed: reason withheld
Jan 21 15:49:34 crc kubenswrapper[4760]: [+]process-running ok
Jan 21 15:49:34 crc kubenswrapper[4760]: healthz check failed
Jan 21 15:49:34 crc kubenswrapper[4760]: I0121 15:49:34.921184 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gfd2m" podUID="bed95d17-1666-4ad0-afea-faa4a683ed81" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:34.994670 4760 generic.go:334] "Generic (PLEG): container finished" podID="b4c156f4-f6be-46db-a27b-59da59600e26" containerID="44d5eacb3cae9354e841247c1b95494990dd89e328c447caec11d523a325a699" exitCode=0
Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:34.994785 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5nbxl" event={"ID":"b4c156f4-f6be-46db-a27b-59da59600e26","Type":"ContainerDied","Data":"44d5eacb3cae9354e841247c1b95494990dd89e328c447caec11d523a325a699"}
Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.003715 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f08c19d6-0704-4562-8e0b-aa1d20161f70-utilities\") pod \"redhat-operators-26f8s\" (UID: \"f08c19d6-0704-4562-8e0b-aa1d20161f70\") " pod="openshift-marketplace/redhat-operators-26f8s"
Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.003771 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgkpx\" (UniqueName: \"kubernetes.io/projected/f08c19d6-0704-4562-8e0b-aa1d20161f70-kube-api-access-mgkpx\") pod \"redhat-operators-26f8s\" (UID: \"f08c19d6-0704-4562-8e0b-aa1d20161f70\") " pod="openshift-marketplace/redhat-operators-26f8s"
Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.004000 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f08c19d6-0704-4562-8e0b-aa1d20161f70-catalog-content\") pod \"redhat-operators-26f8s\" (UID: \"f08c19d6-0704-4562-8e0b-aa1d20161f70\") " pod="openshift-marketplace/redhat-operators-26f8s"
Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.006640 4760 generic.go:334] "Generic (PLEG): container finished" podID="0416bf01-ef39-4a1b-b8ca-8e02ea2882ac" containerID="e371a3a74457ac6d6003019cd4ef6160788cd3352bedd25a6047dad48c01aa1a" exitCode=0
Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.007198 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6xd6s" event={"ID":"0416bf01-ef39-4a1b-b8ca-8e02ea2882ac","Type":"ContainerDied","Data":"e371a3a74457ac6d6003019cd4ef6160788cd3352bedd25a6047dad48c01aa1a"}
Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.007272 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6xd6s" event={"ID":"0416bf01-ef39-4a1b-b8ca-8e02ea2882ac","Type":"ContainerStarted","Data":"ede1a4e7d84b3a77b2d09727692f417c217a123ba4dc60426746e1a06e51fa3b"}
Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.107317 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f08c19d6-0704-4562-8e0b-aa1d20161f70-catalog-content\") pod \"redhat-operators-26f8s\" (UID: \"f08c19d6-0704-4562-8e0b-aa1d20161f70\") " pod="openshift-marketplace/redhat-operators-26f8s"
Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.108110 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f08c19d6-0704-4562-8e0b-aa1d20161f70-utilities\") pod \"redhat-operators-26f8s\" (UID: \"f08c19d6-0704-4562-8e0b-aa1d20161f70\") " pod="openshift-marketplace/redhat-operators-26f8s"
Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.108275 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgkpx\" (UniqueName: \"kubernetes.io/projected/f08c19d6-0704-4562-8e0b-aa1d20161f70-kube-api-access-mgkpx\") pod \"redhat-operators-26f8s\" (UID: \"f08c19d6-0704-4562-8e0b-aa1d20161f70\") " pod="openshift-marketplace/redhat-operators-26f8s"
Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.110507 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f08c19d6-0704-4562-8e0b-aa1d20161f70-catalog-content\") pod \"redhat-operators-26f8s\" (UID: \"f08c19d6-0704-4562-8e0b-aa1d20161f70\") " pod="openshift-marketplace/redhat-operators-26f8s"
Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.111710 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f08c19d6-0704-4562-8e0b-aa1d20161f70-utilities\") pod \"redhat-operators-26f8s\" (UID: \"f08c19d6-0704-4562-8e0b-aa1d20161f70\") " pod="openshift-marketplace/redhat-operators-26f8s"
Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.160460 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgkpx\" (UniqueName: \"kubernetes.io/projected/f08c19d6-0704-4562-8e0b-aa1d20161f70-kube-api-access-mgkpx\") pod \"redhat-operators-26f8s\" (UID: \"f08c19d6-0704-4562-8e0b-aa1d20161f70\") " pod="openshift-marketplace/redhat-operators-26f8s"
Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.174124 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.231715 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6ndfg"]
Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.248396 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-26f8s"
Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.276148 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hckt4"]
Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.277210 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hckt4"
Jan 21 15:49:35 crc kubenswrapper[4760]: W0121 15:49:35.284076 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeae3b2cf_b59a_4ff2_801e_e6a6be3692dc.slice/crio-9c12daa32d80d4059ffd15ea75fd65c89942794d8334765608e63b9adbad6223 WatchSource:0}: Error finding container 9c12daa32d80d4059ffd15ea75fd65c89942794d8334765608e63b9adbad6223: Status 404 returned error can't find the container with id 9c12daa32d80d4059ffd15ea75fd65c89942794d8334765608e63b9adbad6223
Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.292184 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hckt4"]
Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.319907 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f73bc16d-d078-43de-a21d-f79b9529f2dc-utilities\") pod \"redhat-operators-hckt4\" (UID: \"f73bc16d-d078-43de-a21d-f79b9529f2dc\") " pod="openshift-marketplace/redhat-operators-hckt4"
Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.319958 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f73bc16d-d078-43de-a21d-f79b9529f2dc-catalog-content\") pod \"redhat-operators-hckt4\" (UID: \"f73bc16d-d078-43de-a21d-f79b9529f2dc\") " pod="openshift-marketplace/redhat-operators-hckt4"
Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.320056 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb7qd\" (UniqueName: \"kubernetes.io/projected/f73bc16d-d078-43de-a21d-f79b9529f2dc-kube-api-access-vb7qd\") pod \"redhat-operators-hckt4\" (UID: \"f73bc16d-d078-43de-a21d-f79b9529f2dc\") " pod="openshift-marketplace/redhat-operators-hckt4"
Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.424400 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vb7qd\" (UniqueName: \"kubernetes.io/projected/f73bc16d-d078-43de-a21d-f79b9529f2dc-kube-api-access-vb7qd\") pod \"redhat-operators-hckt4\" (UID: \"f73bc16d-d078-43de-a21d-f79b9529f2dc\") " pod="openshift-marketplace/redhat-operators-hckt4"
Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.424676 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f73bc16d-d078-43de-a21d-f79b9529f2dc-utilities\") pod \"redhat-operators-hckt4\" (UID: \"f73bc16d-d078-43de-a21d-f79b9529f2dc\") " pod="openshift-marketplace/redhat-operators-hckt4"
Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.424698 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f73bc16d-d078-43de-a21d-f79b9529f2dc-catalog-content\") pod \"redhat-operators-hckt4\" (UID: \"f73bc16d-d078-43de-a21d-f79b9529f2dc\") " pod="openshift-marketplace/redhat-operators-hckt4"
Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.425388 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f73bc16d-d078-43de-a21d-f79b9529f2dc-catalog-content\") pod \"redhat-operators-hckt4\" (UID: \"f73bc16d-d078-43de-a21d-f79b9529f2dc\") " pod="openshift-marketplace/redhat-operators-hckt4"
Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.426303 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f73bc16d-d078-43de-a21d-f79b9529f2dc-utilities\") pod \"redhat-operators-hckt4\" (UID: \"f73bc16d-d078-43de-a21d-f79b9529f2dc\") " pod="openshift-marketplace/redhat-operators-hckt4"
Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.450273 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vb7qd\" (UniqueName: \"kubernetes.io/projected/f73bc16d-d078-43de-a21d-f79b9529f2dc-kube-api-access-vb7qd\") pod \"redhat-operators-hckt4\" (UID: \"f73bc16d-d078-43de-a21d-f79b9529f2dc\") " pod="openshift-marketplace/redhat-operators-hckt4"
Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.526339 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.526395 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.526422 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.526456 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.550515 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.550781 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.551616 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.573915 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-26f8s"]
Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.597004 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hckt4"
Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.646554 4760 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.695550 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.740634 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.751913 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.917982 4760 patch_prober.go:28] interesting pod/router-default-5444994796-gfd2m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 15:49:35 crc kubenswrapper[4760]: [+]has-synced ok Jan 21 15:49:35 crc kubenswrapper[4760]: [+]process-running ok Jan 21 15:49:35 crc kubenswrapper[4760]: healthz check failed Jan 21 15:49:35 crc kubenswrapper[4760]: I0121 15:49:35.918356 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-gfd2m" podUID="bed95d17-1666-4ad0-afea-faa4a683ed81" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 15:49:36 crc kubenswrapper[4760]: I0121 15:49:36.018985 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-26f8s" 
event={"ID":"f08c19d6-0704-4562-8e0b-aa1d20161f70","Type":"ContainerStarted","Data":"541a7f871278d05ad698fda2df7aa406ca08b0a08158989a26312b95b2c447f8"} Jan 21 15:49:36 crc kubenswrapper[4760]: I0121 15:49:36.022447 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6ndfg" event={"ID":"eae3b2cf-b59a-4ff2-801e-e6a6be3692dc","Type":"ContainerStarted","Data":"d13a45a1552b3e82fff8110314eba47ab12fd71dc125c139385e8e3db8a4d57f"} Jan 21 15:49:36 crc kubenswrapper[4760]: I0121 15:49:36.022488 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6ndfg" event={"ID":"eae3b2cf-b59a-4ff2-801e-e6a6be3692dc","Type":"ContainerStarted","Data":"9c12daa32d80d4059ffd15ea75fd65c89942794d8334765608e63b9adbad6223"} Jan 21 15:49:36 crc kubenswrapper[4760]: I0121 15:49:36.025159 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"b772443e-4487-4b78-8dce-f66b7bd1e6fc","Type":"ContainerStarted","Data":"c3b70561900303071a0074f651fc6a675145ef0989ae8cefad74961433db02fa"} Jan 21 15:49:36 crc kubenswrapper[4760]: I0121 15:49:36.442936 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hckt4"] Jan 21 15:49:36 crc kubenswrapper[4760]: W0121 15:49:36.484986 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf73bc16d_d078_43de_a21d_f79b9529f2dc.slice/crio-cdb9d1d40b1eecfd3836a91488864ac34ac2e234164472b070de1fb4b219de14 WatchSource:0}: Error finding container cdb9d1d40b1eecfd3836a91488864ac34ac2e234164472b070de1fb4b219de14: Status 404 returned error can't find the container with id cdb9d1d40b1eecfd3836a91488864ac34ac2e234164472b070de1fb4b219de14 Jan 21 15:49:36 crc kubenswrapper[4760]: I0121 15:49:36.919737 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-ingress/router-default-5444994796-gfd2m" Jan 21 15:49:36 crc kubenswrapper[4760]: I0121 15:49:36.923732 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-gfd2m" Jan 21 15:49:37 crc kubenswrapper[4760]: I0121 15:49:37.054706 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"08a0dd44e17ece3329978f0a9781113fbf12920898cb43c83578cc9f278f30fd"} Jan 21 15:49:37 crc kubenswrapper[4760]: I0121 15:49:37.092731 4760 generic.go:334] "Generic (PLEG): container finished" podID="eae3b2cf-b59a-4ff2-801e-e6a6be3692dc" containerID="d13a45a1552b3e82fff8110314eba47ab12fd71dc125c139385e8e3db8a4d57f" exitCode=0 Jan 21 15:49:37 crc kubenswrapper[4760]: I0121 15:49:37.092840 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6ndfg" event={"ID":"eae3b2cf-b59a-4ff2-801e-e6a6be3692dc","Type":"ContainerDied","Data":"d13a45a1552b3e82fff8110314eba47ab12fd71dc125c139385e8e3db8a4d57f"} Jan 21 15:49:37 crc kubenswrapper[4760]: I0121 15:49:37.104208 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"b772443e-4487-4b78-8dce-f66b7bd1e6fc","Type":"ContainerStarted","Data":"f2c86218b79294a10ed5eec7c03531a65dfadfe350197efcaebb5a3fa412e38f"} Jan 21 15:49:37 crc kubenswrapper[4760]: I0121 15:49:37.115909 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"6331db3642cbdea03cf1553643d574007a4a3a7adbf9ba9bbb7ba48d69f39583"} Jan 21 15:49:37 crc kubenswrapper[4760]: I0121 15:49:37.128244 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"79bfa71fdb554bced05474d8d6cc4785b322b931430317b29b834b69f9022c4d"} Jan 21 15:49:37 crc kubenswrapper[4760]: I0121 15:49:37.141196 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hckt4" event={"ID":"f73bc16d-d078-43de-a21d-f79b9529f2dc","Type":"ContainerStarted","Data":"cdb9d1d40b1eecfd3836a91488864ac34ac2e234164472b070de1fb4b219de14"} Jan 21 15:49:37 crc kubenswrapper[4760]: I0121 15:49:37.149025 4760 generic.go:334] "Generic (PLEG): container finished" podID="f08c19d6-0704-4562-8e0b-aa1d20161f70" containerID="a3bbfc5e6a85022bc527915cba1d4de9bfd61b5258e677882ba965ba0f9aec02" exitCode=0 Jan 21 15:49:37 crc kubenswrapper[4760]: I0121 15:49:37.149915 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-26f8s" event={"ID":"f08c19d6-0704-4562-8e0b-aa1d20161f70","Type":"ContainerDied","Data":"a3bbfc5e6a85022bc527915cba1d4de9bfd61b5258e677882ba965ba0f9aec02"} Jan 21 15:49:37 crc kubenswrapper[4760]: I0121 15:49:37.162930 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.162874569 podStartE2EDuration="3.162874569s" podCreationTimestamp="2026-01-21 15:49:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:37.161434998 +0000 UTC m=+147.829204576" watchObservedRunningTime="2026-01-21 15:49:37.162874569 +0000 UTC m=+147.830644147" Jan 21 15:49:38 crc kubenswrapper[4760]: I0121 15:49:38.116006 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 21 15:49:38 crc kubenswrapper[4760]: I0121 15:49:38.116752 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 15:49:38 crc kubenswrapper[4760]: I0121 15:49:38.118811 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 21 15:49:38 crc kubenswrapper[4760]: I0121 15:49:38.120231 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 21 15:49:38 crc kubenswrapper[4760]: I0121 15:49:38.134942 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 21 15:49:38 crc kubenswrapper[4760]: I0121 15:49:38.193202 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e79a3f61-7483-489b-b2c7-a200a92b3641-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e79a3f61-7483-489b-b2c7-a200a92b3641\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 15:49:38 crc kubenswrapper[4760]: I0121 15:49:38.194186 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e79a3f61-7483-489b-b2c7-a200a92b3641-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e79a3f61-7483-489b-b2c7-a200a92b3641\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 15:49:38 crc kubenswrapper[4760]: I0121 15:49:38.201013 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"60b0843dbddd5d6a6a31fc21ba2001c53a1818b74d36802763c6bf3de18a61c4"} Jan 21 15:49:38 crc kubenswrapper[4760]: I0121 15:49:38.213047 4760 generic.go:334] "Generic (PLEG): container finished" podID="b772443e-4487-4b78-8dce-f66b7bd1e6fc" 
containerID="f2c86218b79294a10ed5eec7c03531a65dfadfe350197efcaebb5a3fa412e38f" exitCode=0 Jan 21 15:49:38 crc kubenswrapper[4760]: I0121 15:49:38.213135 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"b772443e-4487-4b78-8dce-f66b7bd1e6fc","Type":"ContainerDied","Data":"f2c86218b79294a10ed5eec7c03531a65dfadfe350197efcaebb5a3fa412e38f"} Jan 21 15:49:38 crc kubenswrapper[4760]: I0121 15:49:38.253943 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"ba08993e695631b7ef29dab38613f40415eabc79cb14c11c8cbd487a2e62c031"} Jan 21 15:49:38 crc kubenswrapper[4760]: I0121 15:49:38.295602 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e79a3f61-7483-489b-b2c7-a200a92b3641-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e79a3f61-7483-489b-b2c7-a200a92b3641\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 15:49:38 crc kubenswrapper[4760]: I0121 15:49:38.295681 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e79a3f61-7483-489b-b2c7-a200a92b3641-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e79a3f61-7483-489b-b2c7-a200a92b3641\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 15:49:38 crc kubenswrapper[4760]: I0121 15:49:38.295795 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e79a3f61-7483-489b-b2c7-a200a92b3641-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"e79a3f61-7483-489b-b2c7-a200a92b3641\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 15:49:38 crc kubenswrapper[4760]: I0121 15:49:38.296365 4760 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"5c252006586ca695c86f5cc44d4619fea4bee56edb8a9e1c914010dcd17c90e9"} Jan 21 15:49:38 crc kubenswrapper[4760]: I0121 15:49:38.307119 4760 generic.go:334] "Generic (PLEG): container finished" podID="f73bc16d-d078-43de-a21d-f79b9529f2dc" containerID="11d48db9fe70d759230c200b6d1811336c60c722ef8e03f9d8379e788bf6b2ab" exitCode=0 Jan 21 15:49:38 crc kubenswrapper[4760]: I0121 15:49:38.307214 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hckt4" event={"ID":"f73bc16d-d078-43de-a21d-f79b9529f2dc","Type":"ContainerDied","Data":"11d48db9fe70d759230c200b6d1811336c60c722ef8e03f9d8379e788bf6b2ab"} Jan 21 15:49:38 crc kubenswrapper[4760]: I0121 15:49:38.339634 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e79a3f61-7483-489b-b2c7-a200a92b3641-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"e79a3f61-7483-489b-b2c7-a200a92b3641\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 15:49:38 crc kubenswrapper[4760]: I0121 15:49:38.442707 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 15:49:38 crc kubenswrapper[4760]: I0121 15:49:38.883173 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 21 15:49:39 crc kubenswrapper[4760]: I0121 15:49:39.039904 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-sztm4" Jan 21 15:49:39 crc kubenswrapper[4760]: I0121 15:49:39.349245 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e79a3f61-7483-489b-b2c7-a200a92b3641","Type":"ContainerStarted","Data":"8e2c7280e0735778ecfba58fa22b210dd5b88f01d583dca23b4cf0d0453e79c6"} Jan 21 15:49:39 crc kubenswrapper[4760]: I0121 15:49:39.350388 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:49:39 crc kubenswrapper[4760]: I0121 15:49:39.840227 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 15:49:39 crc kubenswrapper[4760]: I0121 15:49:39.950030 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b772443e-4487-4b78-8dce-f66b7bd1e6fc-kube-api-access\") pod \"b772443e-4487-4b78-8dce-f66b7bd1e6fc\" (UID: \"b772443e-4487-4b78-8dce-f66b7bd1e6fc\") " Jan 21 15:49:39 crc kubenswrapper[4760]: I0121 15:49:39.950090 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b772443e-4487-4b78-8dce-f66b7bd1e6fc-kubelet-dir\") pod \"b772443e-4487-4b78-8dce-f66b7bd1e6fc\" (UID: \"b772443e-4487-4b78-8dce-f66b7bd1e6fc\") " Jan 21 15:49:39 crc kubenswrapper[4760]: I0121 15:49:39.950476 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b772443e-4487-4b78-8dce-f66b7bd1e6fc-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b772443e-4487-4b78-8dce-f66b7bd1e6fc" (UID: "b772443e-4487-4b78-8dce-f66b7bd1e6fc"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:49:39 crc kubenswrapper[4760]: I0121 15:49:39.982706 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b772443e-4487-4b78-8dce-f66b7bd1e6fc-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b772443e-4487-4b78-8dce-f66b7bd1e6fc" (UID: "b772443e-4487-4b78-8dce-f66b7bd1e6fc"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:49:40 crc kubenswrapper[4760]: I0121 15:49:40.052474 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b772443e-4487-4b78-8dce-f66b7bd1e6fc-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 15:49:40 crc kubenswrapper[4760]: I0121 15:49:40.052514 4760 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b772443e-4487-4b78-8dce-f66b7bd1e6fc-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 15:49:40 crc kubenswrapper[4760]: I0121 15:49:40.390203 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"b772443e-4487-4b78-8dce-f66b7bd1e6fc","Type":"ContainerDied","Data":"c3b70561900303071a0074f651fc6a675145ef0989ae8cefad74961433db02fa"} Jan 21 15:49:40 crc kubenswrapper[4760]: I0121 15:49:40.390272 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3b70561900303071a0074f651fc6a675145ef0989ae8cefad74961433db02fa" Jan 21 15:49:40 crc kubenswrapper[4760]: I0121 15:49:40.390391 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 15:49:40 crc kubenswrapper[4760]: I0121 15:49:40.420852 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e79a3f61-7483-489b-b2c7-a200a92b3641","Type":"ContainerStarted","Data":"aca9116bcb6ec44764dd4ca0579496d5265465c401d1128ac9733b28fcb9d869"} Jan 21 15:49:40 crc kubenswrapper[4760]: I0121 15:49:40.440968 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.440939779 podStartE2EDuration="2.440939779s" podCreationTimestamp="2026-01-21 15:49:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:49:40.4364689 +0000 UTC m=+151.104238478" watchObservedRunningTime="2026-01-21 15:49:40.440939779 +0000 UTC m=+151.108709357" Jan 21 15:49:41 crc kubenswrapper[4760]: I0121 15:49:41.442276 4760 generic.go:334] "Generic (PLEG): container finished" podID="e79a3f61-7483-489b-b2c7-a200a92b3641" containerID="aca9116bcb6ec44764dd4ca0579496d5265465c401d1128ac9733b28fcb9d869" exitCode=0 Jan 21 15:49:41 crc kubenswrapper[4760]: I0121 15:49:41.442540 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e79a3f61-7483-489b-b2c7-a200a92b3641","Type":"ContainerDied","Data":"aca9116bcb6ec44764dd4ca0579496d5265465c401d1128ac9733b28fcb9d869"} Jan 21 15:49:43 crc kubenswrapper[4760]: I0121 15:49:43.222696 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-gqcsb" Jan 21 15:49:43 crc kubenswrapper[4760]: I0121 15:49:43.387298 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-clnlg" Jan 21 15:49:43 crc kubenswrapper[4760]: I0121 15:49:43.391775 
4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-clnlg" Jan 21 15:49:45 crc kubenswrapper[4760]: I0121 15:49:45.530296 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-s7vh9"] Jan 21 15:49:45 crc kubenswrapper[4760]: I0121 15:49:45.532441 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-s7vh9" podUID="27643829-9abc-4f6c-a6e9-5f0c86eb7594" containerName="controller-manager" containerID="cri-o://9733408d618c3d3f112b42748587eff2b4ac3646c57ed2c7de3d5c0e01526379" gracePeriod=30 Jan 21 15:49:45 crc kubenswrapper[4760]: I0121 15:49:45.554043 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-cxv6j"] Jan 21 15:49:45 crc kubenswrapper[4760]: I0121 15:49:45.554519 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cxv6j" podUID="8675f6e4-a233-45db-8916-68947da2554c" containerName="route-controller-manager" containerID="cri-o://ad826b1f0424c95593a8dfd54d176b790a48e64c979f43d669eee1fc5fd67ad3" gracePeriod=30 Jan 21 15:49:46 crc kubenswrapper[4760]: I0121 15:49:46.496309 4760 generic.go:334] "Generic (PLEG): container finished" podID="27643829-9abc-4f6c-a6e9-5f0c86eb7594" containerID="9733408d618c3d3f112b42748587eff2b4ac3646c57ed2c7de3d5c0e01526379" exitCode=0 Jan 21 15:49:46 crc kubenswrapper[4760]: I0121 15:49:46.496374 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-s7vh9" event={"ID":"27643829-9abc-4f6c-a6e9-5f0c86eb7594","Type":"ContainerDied","Data":"9733408d618c3d3f112b42748587eff2b4ac3646c57ed2c7de3d5c0e01526379"} Jan 21 15:49:46 crc kubenswrapper[4760]: I0121 15:49:46.499284 4760 generic.go:334] "Generic 
(PLEG): container finished" podID="8675f6e4-a233-45db-8916-68947da2554c" containerID="ad826b1f0424c95593a8dfd54d176b790a48e64c979f43d669eee1fc5fd67ad3" exitCode=0 Jan 21 15:49:46 crc kubenswrapper[4760]: I0121 15:49:46.499317 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cxv6j" event={"ID":"8675f6e4-a233-45db-8916-68947da2554c","Type":"ContainerDied","Data":"ad826b1f0424c95593a8dfd54d176b790a48e64c979f43d669eee1fc5fd67ad3"} Jan 21 15:49:50 crc kubenswrapper[4760]: I0121 15:49:50.763501 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0a4b6476-7a89-41b4-b918-5628f622c7c1-metrics-certs\") pod \"network-metrics-daemon-bbr8l\" (UID: \"0a4b6476-7a89-41b4-b918-5628f622c7c1\") " pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:49:50 crc kubenswrapper[4760]: I0121 15:49:50.781970 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0a4b6476-7a89-41b4-b918-5628f622c7c1-metrics-certs\") pod \"network-metrics-daemon-bbr8l\" (UID: \"0a4b6476-7a89-41b4-b918-5628f622c7c1\") " pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:49:50 crc kubenswrapper[4760]: I0121 15:49:50.939102 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bbr8l" Jan 21 15:49:50 crc kubenswrapper[4760]: I0121 15:49:50.946579 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:49:50 crc kubenswrapper[4760]: I0121 15:49:50.946754 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:49:52 crc kubenswrapper[4760]: I0121 15:49:52.335173 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" Jan 21 15:49:53 crc kubenswrapper[4760]: I0121 15:49:53.332762 4760 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-cxv6j container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 21 15:49:53 crc kubenswrapper[4760]: I0121 15:49:53.333454 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cxv6j" podUID="8675f6e4-a233-45db-8916-68947da2554c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 21 15:49:54 crc kubenswrapper[4760]: I0121 15:49:54.127833 4760 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-s7vh9 container/controller-manager 
namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 15:49:54 crc kubenswrapper[4760]: I0121 15:49:54.127930 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-s7vh9" podUID="27643829-9abc-4f6c-a6e9-5f0c86eb7594" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 15:49:55 crc kubenswrapper[4760]: I0121 15:49:55.457263 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 15:49:55 crc kubenswrapper[4760]: I0121 15:49:55.467109 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-s7vh9" Jan 21 15:49:55 crc kubenswrapper[4760]: I0121 15:49:55.525434 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e79a3f61-7483-489b-b2c7-a200a92b3641-kube-api-access\") pod \"e79a3f61-7483-489b-b2c7-a200a92b3641\" (UID: \"e79a3f61-7483-489b-b2c7-a200a92b3641\") " Jan 21 15:49:55 crc kubenswrapper[4760]: I0121 15:49:55.525517 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k25rd\" (UniqueName: \"kubernetes.io/projected/27643829-9abc-4f6c-a6e9-5f0c86eb7594-kube-api-access-k25rd\") pod \"27643829-9abc-4f6c-a6e9-5f0c86eb7594\" (UID: \"27643829-9abc-4f6c-a6e9-5f0c86eb7594\") " Jan 21 15:49:55 crc kubenswrapper[4760]: I0121 15:49:55.525556 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e79a3f61-7483-489b-b2c7-a200a92b3641-kubelet-dir\") pod \"e79a3f61-7483-489b-b2c7-a200a92b3641\" (UID: \"e79a3f61-7483-489b-b2c7-a200a92b3641\") " Jan 21 15:49:55 crc kubenswrapper[4760]: I0121 15:49:55.525664 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/27643829-9abc-4f6c-a6e9-5f0c86eb7594-proxy-ca-bundles\") pod \"27643829-9abc-4f6c-a6e9-5f0c86eb7594\" (UID: \"27643829-9abc-4f6c-a6e9-5f0c86eb7594\") " Jan 21 15:49:55 crc kubenswrapper[4760]: I0121 15:49:55.525718 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27643829-9abc-4f6c-a6e9-5f0c86eb7594-config\") pod \"27643829-9abc-4f6c-a6e9-5f0c86eb7594\" (UID: \"27643829-9abc-4f6c-a6e9-5f0c86eb7594\") " Jan 21 15:49:55 crc kubenswrapper[4760]: I0121 15:49:55.525744 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/27643829-9abc-4f6c-a6e9-5f0c86eb7594-client-ca\") pod \"27643829-9abc-4f6c-a6e9-5f0c86eb7594\" (UID: \"27643829-9abc-4f6c-a6e9-5f0c86eb7594\") " Jan 21 15:49:55 crc kubenswrapper[4760]: I0121 15:49:55.525758 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e79a3f61-7483-489b-b2c7-a200a92b3641-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e79a3f61-7483-489b-b2c7-a200a92b3641" (UID: "e79a3f61-7483-489b-b2c7-a200a92b3641"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:49:55 crc kubenswrapper[4760]: I0121 15:49:55.525770 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27643829-9abc-4f6c-a6e9-5f0c86eb7594-serving-cert\") pod \"27643829-9abc-4f6c-a6e9-5f0c86eb7594\" (UID: \"27643829-9abc-4f6c-a6e9-5f0c86eb7594\") " Jan 21 15:49:55 crc kubenswrapper[4760]: I0121 15:49:55.526062 4760 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e79a3f61-7483-489b-b2c7-a200a92b3641-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 15:49:55 crc kubenswrapper[4760]: I0121 15:49:55.526813 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27643829-9abc-4f6c-a6e9-5f0c86eb7594-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "27643829-9abc-4f6c-a6e9-5f0c86eb7594" (UID: "27643829-9abc-4f6c-a6e9-5f0c86eb7594"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:49:55 crc kubenswrapper[4760]: I0121 15:49:55.526877 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27643829-9abc-4f6c-a6e9-5f0c86eb7594-config" (OuterVolumeSpecName: "config") pod "27643829-9abc-4f6c-a6e9-5f0c86eb7594" (UID: "27643829-9abc-4f6c-a6e9-5f0c86eb7594"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:49:55 crc kubenswrapper[4760]: I0121 15:49:55.526825 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27643829-9abc-4f6c-a6e9-5f0c86eb7594-client-ca" (OuterVolumeSpecName: "client-ca") pod "27643829-9abc-4f6c-a6e9-5f0c86eb7594" (UID: "27643829-9abc-4f6c-a6e9-5f0c86eb7594"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:49:55 crc kubenswrapper[4760]: I0121 15:49:55.532524 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27643829-9abc-4f6c-a6e9-5f0c86eb7594-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "27643829-9abc-4f6c-a6e9-5f0c86eb7594" (UID: "27643829-9abc-4f6c-a6e9-5f0c86eb7594"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:49:55 crc kubenswrapper[4760]: I0121 15:49:55.532845 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e79a3f61-7483-489b-b2c7-a200a92b3641-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e79a3f61-7483-489b-b2c7-a200a92b3641" (UID: "e79a3f61-7483-489b-b2c7-a200a92b3641"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:49:55 crc kubenswrapper[4760]: I0121 15:49:55.532897 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27643829-9abc-4f6c-a6e9-5f0c86eb7594-kube-api-access-k25rd" (OuterVolumeSpecName: "kube-api-access-k25rd") pod "27643829-9abc-4f6c-a6e9-5f0c86eb7594" (UID: "27643829-9abc-4f6c-a6e9-5f0c86eb7594"). InnerVolumeSpecName "kube-api-access-k25rd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:49:55 crc kubenswrapper[4760]: I0121 15:49:55.568418 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 15:49:55 crc kubenswrapper[4760]: I0121 15:49:55.568424 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"e79a3f61-7483-489b-b2c7-a200a92b3641","Type":"ContainerDied","Data":"8e2c7280e0735778ecfba58fa22b210dd5b88f01d583dca23b4cf0d0453e79c6"} Jan 21 15:49:55 crc kubenswrapper[4760]: I0121 15:49:55.568523 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e2c7280e0735778ecfba58fa22b210dd5b88f01d583dca23b4cf0d0453e79c6" Jan 21 15:49:55 crc kubenswrapper[4760]: I0121 15:49:55.572003 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-s7vh9" event={"ID":"27643829-9abc-4f6c-a6e9-5f0c86eb7594","Type":"ContainerDied","Data":"b426f095c9eeebbc73d791d28bf5018ba0416025738bc645f659c6cca9d8374b"} Jan 21 15:49:55 crc kubenswrapper[4760]: I0121 15:49:55.572088 4760 scope.go:117] "RemoveContainer" containerID="9733408d618c3d3f112b42748587eff2b4ac3646c57ed2c7de3d5c0e01526379" Jan 21 15:49:55 crc kubenswrapper[4760]: I0121 15:49:55.572094 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-s7vh9" Jan 21 15:49:55 crc kubenswrapper[4760]: I0121 15:49:55.612104 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-s7vh9"] Jan 21 15:49:55 crc kubenswrapper[4760]: I0121 15:49:55.615043 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-s7vh9"] Jan 21 15:49:55 crc kubenswrapper[4760]: I0121 15:49:55.627469 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e79a3f61-7483-489b-b2c7-a200a92b3641-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 15:49:55 crc kubenswrapper[4760]: I0121 15:49:55.627514 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k25rd\" (UniqueName: \"kubernetes.io/projected/27643829-9abc-4f6c-a6e9-5f0c86eb7594-kube-api-access-k25rd\") on node \"crc\" DevicePath \"\"" Jan 21 15:49:55 crc kubenswrapper[4760]: I0121 15:49:55.627531 4760 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/27643829-9abc-4f6c-a6e9-5f0c86eb7594-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 15:49:55 crc kubenswrapper[4760]: I0121 15:49:55.627549 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27643829-9abc-4f6c-a6e9-5f0c86eb7594-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:49:55 crc kubenswrapper[4760]: I0121 15:49:55.627560 4760 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/27643829-9abc-4f6c-a6e9-5f0c86eb7594-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:49:55 crc kubenswrapper[4760]: I0121 15:49:55.627578 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/27643829-9abc-4f6c-a6e9-5f0c86eb7594-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:49:55 crc kubenswrapper[4760]: I0121 15:49:55.632562 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27643829-9abc-4f6c-a6e9-5f0c86eb7594" path="/var/lib/kubelet/pods/27643829-9abc-4f6c-a6e9-5f0c86eb7594/volumes" Jan 21 15:49:58 crc kubenswrapper[4760]: I0121 15:49:58.238468 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-99594499c-294mp"] Jan 21 15:49:58 crc kubenswrapper[4760]: E0121 15:49:58.238848 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e79a3f61-7483-489b-b2c7-a200a92b3641" containerName="pruner" Jan 21 15:49:58 crc kubenswrapper[4760]: I0121 15:49:58.238869 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="e79a3f61-7483-489b-b2c7-a200a92b3641" containerName="pruner" Jan 21 15:49:58 crc kubenswrapper[4760]: E0121 15:49:58.238889 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27643829-9abc-4f6c-a6e9-5f0c86eb7594" containerName="controller-manager" Jan 21 15:49:58 crc kubenswrapper[4760]: I0121 15:49:58.238900 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="27643829-9abc-4f6c-a6e9-5f0c86eb7594" containerName="controller-manager" Jan 21 15:49:58 crc kubenswrapper[4760]: E0121 15:49:58.238911 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b772443e-4487-4b78-8dce-f66b7bd1e6fc" containerName="pruner" Jan 21 15:49:58 crc kubenswrapper[4760]: I0121 15:49:58.238919 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="b772443e-4487-4b78-8dce-f66b7bd1e6fc" containerName="pruner" Jan 21 15:49:58 crc kubenswrapper[4760]: I0121 15:49:58.239109 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="27643829-9abc-4f6c-a6e9-5f0c86eb7594" containerName="controller-manager" Jan 21 15:49:58 crc kubenswrapper[4760]: I0121 15:49:58.239125 4760 
memory_manager.go:354] "RemoveStaleState removing state" podUID="e79a3f61-7483-489b-b2c7-a200a92b3641" containerName="pruner" Jan 21 15:49:58 crc kubenswrapper[4760]: I0121 15:49:58.239136 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="b772443e-4487-4b78-8dce-f66b7bd1e6fc" containerName="pruner" Jan 21 15:49:58 crc kubenswrapper[4760]: I0121 15:49:58.239727 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-99594499c-294mp" Jan 21 15:49:58 crc kubenswrapper[4760]: I0121 15:49:58.241282 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-99594499c-294mp"] Jan 21 15:49:58 crc kubenswrapper[4760]: I0121 15:49:58.242967 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 21 15:49:58 crc kubenswrapper[4760]: I0121 15:49:58.243539 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 21 15:49:58 crc kubenswrapper[4760]: I0121 15:49:58.243937 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 21 15:49:58 crc kubenswrapper[4760]: I0121 15:49:58.244053 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 21 15:49:58 crc kubenswrapper[4760]: I0121 15:49:58.244355 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 21 15:49:58 crc kubenswrapper[4760]: I0121 15:49:58.244649 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 21 15:49:58 crc kubenswrapper[4760]: I0121 15:49:58.249097 4760 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"openshift-global-ca" Jan 21 15:49:58 crc kubenswrapper[4760]: I0121 15:49:58.362632 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf-config\") pod \"controller-manager-99594499c-294mp\" (UID: \"01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf\") " pod="openshift-controller-manager/controller-manager-99594499c-294mp" Jan 21 15:49:58 crc kubenswrapper[4760]: I0121 15:49:58.362694 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf-client-ca\") pod \"controller-manager-99594499c-294mp\" (UID: \"01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf\") " pod="openshift-controller-manager/controller-manager-99594499c-294mp" Jan 21 15:49:58 crc kubenswrapper[4760]: I0121 15:49:58.362737 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf-serving-cert\") pod \"controller-manager-99594499c-294mp\" (UID: \"01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf\") " pod="openshift-controller-manager/controller-manager-99594499c-294mp" Jan 21 15:49:58 crc kubenswrapper[4760]: I0121 15:49:58.362772 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf-proxy-ca-bundles\") pod \"controller-manager-99594499c-294mp\" (UID: \"01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf\") " pod="openshift-controller-manager/controller-manager-99594499c-294mp" Jan 21 15:49:58 crc kubenswrapper[4760]: I0121 15:49:58.362994 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzb5v\" (UniqueName: 
\"kubernetes.io/projected/01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf-kube-api-access-jzb5v\") pod \"controller-manager-99594499c-294mp\" (UID: \"01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf\") " pod="openshift-controller-manager/controller-manager-99594499c-294mp" Jan 21 15:49:58 crc kubenswrapper[4760]: I0121 15:49:58.464300 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf-proxy-ca-bundles\") pod \"controller-manager-99594499c-294mp\" (UID: \"01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf\") " pod="openshift-controller-manager/controller-manager-99594499c-294mp" Jan 21 15:49:58 crc kubenswrapper[4760]: I0121 15:49:58.464449 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzb5v\" (UniqueName: \"kubernetes.io/projected/01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf-kube-api-access-jzb5v\") pod \"controller-manager-99594499c-294mp\" (UID: \"01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf\") " pod="openshift-controller-manager/controller-manager-99594499c-294mp" Jan 21 15:49:58 crc kubenswrapper[4760]: I0121 15:49:58.464526 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf-config\") pod \"controller-manager-99594499c-294mp\" (UID: \"01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf\") " pod="openshift-controller-manager/controller-manager-99594499c-294mp" Jan 21 15:49:58 crc kubenswrapper[4760]: I0121 15:49:58.464587 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf-client-ca\") pod \"controller-manager-99594499c-294mp\" (UID: \"01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf\") " pod="openshift-controller-manager/controller-manager-99594499c-294mp" Jan 21 15:49:58 crc kubenswrapper[4760]: I0121 15:49:58.464650 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf-serving-cert\") pod \"controller-manager-99594499c-294mp\" (UID: \"01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf\") " pod="openshift-controller-manager/controller-manager-99594499c-294mp" Jan 21 15:49:58 crc kubenswrapper[4760]: I0121 15:49:58.465828 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf-client-ca\") pod \"controller-manager-99594499c-294mp\" (UID: \"01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf\") " pod="openshift-controller-manager/controller-manager-99594499c-294mp" Jan 21 15:49:58 crc kubenswrapper[4760]: I0121 15:49:58.467347 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf-proxy-ca-bundles\") pod \"controller-manager-99594499c-294mp\" (UID: \"01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf\") " pod="openshift-controller-manager/controller-manager-99594499c-294mp" Jan 21 15:49:58 crc kubenswrapper[4760]: I0121 15:49:58.479309 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf-serving-cert\") pod \"controller-manager-99594499c-294mp\" (UID: \"01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf\") " pod="openshift-controller-manager/controller-manager-99594499c-294mp" Jan 21 15:49:58 crc kubenswrapper[4760]: I0121 15:49:58.481696 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzb5v\" (UniqueName: \"kubernetes.io/projected/01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf-kube-api-access-jzb5v\") pod \"controller-manager-99594499c-294mp\" (UID: \"01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf\") " pod="openshift-controller-manager/controller-manager-99594499c-294mp" Jan 
21 15:49:59 crc kubenswrapper[4760]: I0121 15:49:59.721840 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf-config\") pod \"controller-manager-99594499c-294mp\" (UID: \"01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf\") " pod="openshift-controller-manager/controller-manager-99594499c-294mp" Jan 21 15:49:59 crc kubenswrapper[4760]: I0121 15:49:59.763260 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-99594499c-294mp" Jan 21 15:50:03 crc kubenswrapper[4760]: I0121 15:50:03.990195 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kvxcc" Jan 21 15:50:04 crc kubenswrapper[4760]: I0121 15:50:04.333198 4760 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-cxv6j container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 15:50:04 crc kubenswrapper[4760]: I0121 15:50:04.333307 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cxv6j" podUID="8675f6e4-a233-45db-8916-68947da2554c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 15:50:05 crc kubenswrapper[4760]: I0121 15:50:05.460631 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-99594499c-294mp"] Jan 21 15:50:13 crc kubenswrapper[4760]: I0121 15:50:13.712960 4760 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 21 15:50:13 crc kubenswrapper[4760]: I0121 15:50:13.714696 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 15:50:13 crc kubenswrapper[4760]: I0121 15:50:13.725832 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 21 15:50:13 crc kubenswrapper[4760]: I0121 15:50:13.725843 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 21 15:50:13 crc kubenswrapper[4760]: I0121 15:50:13.731436 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 21 15:50:13 crc kubenswrapper[4760]: I0121 15:50:13.802516 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/50ff8c4c-86ac-4abe-9dbc-69a277a3e34c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"50ff8c4c-86ac-4abe-9dbc-69a277a3e34c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 15:50:13 crc kubenswrapper[4760]: I0121 15:50:13.802700 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/50ff8c4c-86ac-4abe-9dbc-69a277a3e34c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"50ff8c4c-86ac-4abe-9dbc-69a277a3e34c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 15:50:13 crc kubenswrapper[4760]: I0121 15:50:13.904458 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/50ff8c4c-86ac-4abe-9dbc-69a277a3e34c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"50ff8c4c-86ac-4abe-9dbc-69a277a3e34c\") " 
pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 15:50:13 crc kubenswrapper[4760]: I0121 15:50:13.904524 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/50ff8c4c-86ac-4abe-9dbc-69a277a3e34c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"50ff8c4c-86ac-4abe-9dbc-69a277a3e34c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 15:50:13 crc kubenswrapper[4760]: I0121 15:50:13.904641 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/50ff8c4c-86ac-4abe-9dbc-69a277a3e34c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"50ff8c4c-86ac-4abe-9dbc-69a277a3e34c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 15:50:13 crc kubenswrapper[4760]: I0121 15:50:13.925698 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/50ff8c4c-86ac-4abe-9dbc-69a277a3e34c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"50ff8c4c-86ac-4abe-9dbc-69a277a3e34c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.038037 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.206538 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cxv6j" Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.242131 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75fdddbbb-hms6q"] Jan 21 15:50:14 crc kubenswrapper[4760]: E0121 15:50:14.242746 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8675f6e4-a233-45db-8916-68947da2554c" containerName="route-controller-manager" Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.242765 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="8675f6e4-a233-45db-8916-68947da2554c" containerName="route-controller-manager" Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.242889 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="8675f6e4-a233-45db-8916-68947da2554c" containerName="route-controller-manager" Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.243387 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75fdddbbb-hms6q" Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.258372 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75fdddbbb-hms6q"] Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.309547 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8675f6e4-a233-45db-8916-68947da2554c-config\") pod \"8675f6e4-a233-45db-8916-68947da2554c\" (UID: \"8675f6e4-a233-45db-8916-68947da2554c\") " Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.309592 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8675f6e4-a233-45db-8916-68947da2554c-serving-cert\") pod \"8675f6e4-a233-45db-8916-68947da2554c\" (UID: \"8675f6e4-a233-45db-8916-68947da2554c\") " Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.309630 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8675f6e4-a233-45db-8916-68947da2554c-client-ca\") pod \"8675f6e4-a233-45db-8916-68947da2554c\" (UID: \"8675f6e4-a233-45db-8916-68947da2554c\") " Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.309682 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbccx\" (UniqueName: \"kubernetes.io/projected/8675f6e4-a233-45db-8916-68947da2554c-kube-api-access-lbccx\") pod \"8675f6e4-a233-45db-8916-68947da2554c\" (UID: \"8675f6e4-a233-45db-8916-68947da2554c\") " Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.309845 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a0be348-0efb-43ad-812e-da614a51704b-serving-cert\") pod 
\"route-controller-manager-75fdddbbb-hms6q\" (UID: \"1a0be348-0efb-43ad-812e-da614a51704b\") " pod="openshift-route-controller-manager/route-controller-manager-75fdddbbb-hms6q" Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.309875 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a0be348-0efb-43ad-812e-da614a51704b-client-ca\") pod \"route-controller-manager-75fdddbbb-hms6q\" (UID: \"1a0be348-0efb-43ad-812e-da614a51704b\") " pod="openshift-route-controller-manager/route-controller-manager-75fdddbbb-hms6q" Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.309905 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a0be348-0efb-43ad-812e-da614a51704b-config\") pod \"route-controller-manager-75fdddbbb-hms6q\" (UID: \"1a0be348-0efb-43ad-812e-da614a51704b\") " pod="openshift-route-controller-manager/route-controller-manager-75fdddbbb-hms6q" Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.310002 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvrp6\" (UniqueName: \"kubernetes.io/projected/1a0be348-0efb-43ad-812e-da614a51704b-kube-api-access-pvrp6\") pod \"route-controller-manager-75fdddbbb-hms6q\" (UID: \"1a0be348-0efb-43ad-812e-da614a51704b\") " pod="openshift-route-controller-manager/route-controller-manager-75fdddbbb-hms6q" Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.310945 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8675f6e4-a233-45db-8916-68947da2554c-client-ca" (OuterVolumeSpecName: "client-ca") pod "8675f6e4-a233-45db-8916-68947da2554c" (UID: "8675f6e4-a233-45db-8916-68947da2554c"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.311125 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8675f6e4-a233-45db-8916-68947da2554c-config" (OuterVolumeSpecName: "config") pod "8675f6e4-a233-45db-8916-68947da2554c" (UID: "8675f6e4-a233-45db-8916-68947da2554c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.316800 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8675f6e4-a233-45db-8916-68947da2554c-kube-api-access-lbccx" (OuterVolumeSpecName: "kube-api-access-lbccx") pod "8675f6e4-a233-45db-8916-68947da2554c" (UID: "8675f6e4-a233-45db-8916-68947da2554c"). InnerVolumeSpecName "kube-api-access-lbccx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.321997 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8675f6e4-a233-45db-8916-68947da2554c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8675f6e4-a233-45db-8916-68947da2554c" (UID: "8675f6e4-a233-45db-8916-68947da2554c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.333927 4760 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-cxv6j container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.333987 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cxv6j" podUID="8675f6e4-a233-45db-8916-68947da2554c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.401725 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-bbr8l"] Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.411290 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvrp6\" (UniqueName: \"kubernetes.io/projected/1a0be348-0efb-43ad-812e-da614a51704b-kube-api-access-pvrp6\") pod \"route-controller-manager-75fdddbbb-hms6q\" (UID: \"1a0be348-0efb-43ad-812e-da614a51704b\") " pod="openshift-route-controller-manager/route-controller-manager-75fdddbbb-hms6q" Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.411362 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a0be348-0efb-43ad-812e-da614a51704b-serving-cert\") pod \"route-controller-manager-75fdddbbb-hms6q\" (UID: \"1a0be348-0efb-43ad-812e-da614a51704b\") " 
pod="openshift-route-controller-manager/route-controller-manager-75fdddbbb-hms6q" Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.411392 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a0be348-0efb-43ad-812e-da614a51704b-client-ca\") pod \"route-controller-manager-75fdddbbb-hms6q\" (UID: \"1a0be348-0efb-43ad-812e-da614a51704b\") " pod="openshift-route-controller-manager/route-controller-manager-75fdddbbb-hms6q" Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.411438 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a0be348-0efb-43ad-812e-da614a51704b-config\") pod \"route-controller-manager-75fdddbbb-hms6q\" (UID: \"1a0be348-0efb-43ad-812e-da614a51704b\") " pod="openshift-route-controller-manager/route-controller-manager-75fdddbbb-hms6q" Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.411534 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8675f6e4-a233-45db-8916-68947da2554c-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.411556 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8675f6e4-a233-45db-8916-68947da2554c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.411569 4760 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8675f6e4-a233-45db-8916-68947da2554c-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.411582 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbccx\" (UniqueName: \"kubernetes.io/projected/8675f6e4-a233-45db-8916-68947da2554c-kube-api-access-lbccx\") on node \"crc\" DevicePath \"\"" Jan 21 
15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.413965 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a0be348-0efb-43ad-812e-da614a51704b-client-ca\") pod \"route-controller-manager-75fdddbbb-hms6q\" (UID: \"1a0be348-0efb-43ad-812e-da614a51704b\") " pod="openshift-route-controller-manager/route-controller-manager-75fdddbbb-hms6q" Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.415278 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a0be348-0efb-43ad-812e-da614a51704b-config\") pod \"route-controller-manager-75fdddbbb-hms6q\" (UID: \"1a0be348-0efb-43ad-812e-da614a51704b\") " pod="openshift-route-controller-manager/route-controller-manager-75fdddbbb-hms6q" Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.415389 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a0be348-0efb-43ad-812e-da614a51704b-serving-cert\") pod \"route-controller-manager-75fdddbbb-hms6q\" (UID: \"1a0be348-0efb-43ad-812e-da614a51704b\") " pod="openshift-route-controller-manager/route-controller-manager-75fdddbbb-hms6q" Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.428427 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvrp6\" (UniqueName: \"kubernetes.io/projected/1a0be348-0efb-43ad-812e-da614a51704b-kube-api-access-pvrp6\") pod \"route-controller-manager-75fdddbbb-hms6q\" (UID: \"1a0be348-0efb-43ad-812e-da614a51704b\") " pod="openshift-route-controller-manager/route-controller-manager-75fdddbbb-hms6q" Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.575484 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75fdddbbb-hms6q" Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.687390 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cxv6j" event={"ID":"8675f6e4-a233-45db-8916-68947da2554c","Type":"ContainerDied","Data":"a4442532034d55aec7694e2aae62d46cc806fbd6825379546e85cefe87fe5880"} Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.687443 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cxv6j" Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.722034 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-cxv6j"] Jan 21 15:50:14 crc kubenswrapper[4760]: I0121 15:50:14.726314 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-cxv6j"] Jan 21 15:50:15 crc kubenswrapper[4760]: I0121 15:50:15.630380 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8675f6e4-a233-45db-8916-68947da2554c" path="/var/lib/kubelet/pods/8675f6e4-a233-45db-8916-68947da2554c/volumes" Jan 21 15:50:16 crc kubenswrapper[4760]: I0121 15:50:16.082622 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 15:50:18 crc kubenswrapper[4760]: E0121 15:50:18.314650 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 21 15:50:18 crc kubenswrapper[4760]: E0121 15:50:18.314867 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z4t8k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-x5x5q_openshift-marketplace(4d5712fb-d149-4923-bd66-7ec385c7508d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 15:50:18 crc kubenswrapper[4760]: E0121 15:50:18.316001 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: 
code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-x5x5q" podUID="4d5712fb-d149-4923-bd66-7ec385c7508d" Jan 21 15:50:19 crc kubenswrapper[4760]: I0121 15:50:19.315955 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 21 15:50:19 crc kubenswrapper[4760]: I0121 15:50:19.321582 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 21 15:50:19 crc kubenswrapper[4760]: I0121 15:50:19.324814 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 21 15:50:19 crc kubenswrapper[4760]: I0121 15:50:19.402156 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/430b8562-5701-4889-bf8f-71ddef9325b0-kube-api-access\") pod \"installer-9-crc\" (UID: \"430b8562-5701-4889-bf8f-71ddef9325b0\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 15:50:19 crc kubenswrapper[4760]: I0121 15:50:19.402206 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/430b8562-5701-4889-bf8f-71ddef9325b0-var-lock\") pod \"installer-9-crc\" (UID: \"430b8562-5701-4889-bf8f-71ddef9325b0\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 15:50:19 crc kubenswrapper[4760]: I0121 15:50:19.402238 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/430b8562-5701-4889-bf8f-71ddef9325b0-kubelet-dir\") pod \"installer-9-crc\" (UID: \"430b8562-5701-4889-bf8f-71ddef9325b0\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 15:50:19 crc kubenswrapper[4760]: I0121 15:50:19.503755 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/430b8562-5701-4889-bf8f-71ddef9325b0-kube-api-access\") pod \"installer-9-crc\" (UID: \"430b8562-5701-4889-bf8f-71ddef9325b0\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 15:50:19 crc kubenswrapper[4760]: I0121 15:50:19.503813 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/430b8562-5701-4889-bf8f-71ddef9325b0-var-lock\") pod \"installer-9-crc\" (UID: \"430b8562-5701-4889-bf8f-71ddef9325b0\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 15:50:19 crc kubenswrapper[4760]: I0121 15:50:19.503843 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/430b8562-5701-4889-bf8f-71ddef9325b0-kubelet-dir\") pod \"installer-9-crc\" (UID: \"430b8562-5701-4889-bf8f-71ddef9325b0\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 15:50:19 crc kubenswrapper[4760]: I0121 15:50:19.504271 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/430b8562-5701-4889-bf8f-71ddef9325b0-var-lock\") pod \"installer-9-crc\" (UID: \"430b8562-5701-4889-bf8f-71ddef9325b0\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 15:50:19 crc kubenswrapper[4760]: I0121 15:50:19.504318 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/430b8562-5701-4889-bf8f-71ddef9325b0-kubelet-dir\") pod \"installer-9-crc\" (UID: \"430b8562-5701-4889-bf8f-71ddef9325b0\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 15:50:19 crc kubenswrapper[4760]: I0121 15:50:19.533748 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/430b8562-5701-4889-bf8f-71ddef9325b0-kube-api-access\") 
pod \"installer-9-crc\" (UID: \"430b8562-5701-4889-bf8f-71ddef9325b0\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 15:50:19 crc kubenswrapper[4760]: I0121 15:50:19.643408 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 21 15:50:20 crc kubenswrapper[4760]: I0121 15:50:20.946052 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:50:20 crc kubenswrapper[4760]: I0121 15:50:20.946126 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:50:21 crc kubenswrapper[4760]: E0121 15:50:21.871436 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 21 15:50:21 crc kubenswrapper[4760]: E0121 15:50:21.871617 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vb7qd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-hckt4_openshift-marketplace(f73bc16d-d078-43de-a21d-f79b9529f2dc): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 15:50:21 crc kubenswrapper[4760]: E0121 15:50:21.872886 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-hckt4" podUID="f73bc16d-d078-43de-a21d-f79b9529f2dc" Jan 21 15:50:25 crc 
kubenswrapper[4760]: E0121 15:50:25.390067 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 21 15:50:25 crc kubenswrapper[4760]: E0121 15:50:25.390611 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5s5tz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-6xd6s_openshift-marketplace(0416bf01-ef39-4a1b-b8ca-8e02ea2882ac): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 15:50:25 crc kubenswrapper[4760]: E0121 15:50:25.391760 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-6xd6s" podUID="0416bf01-ef39-4a1b-b8ca-8e02ea2882ac" Jan 21 15:50:27 crc kubenswrapper[4760]: E0121 15:50:27.098435 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 21 15:50:27 crc kubenswrapper[4760]: E0121 15:50:27.099825 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7zrnd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-6ndfg_openshift-marketplace(eae3b2cf-b59a-4ff2-801e-e6a6be3692dc): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 15:50:27 crc kubenswrapper[4760]: E0121 15:50:27.101194 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-6ndfg" podUID="eae3b2cf-b59a-4ff2-801e-e6a6be3692dc" Jan 21 15:50:27 crc 
kubenswrapper[4760]: I0121 15:50:27.370137 4760 scope.go:117] "RemoveContainer" containerID="ad826b1f0424c95593a8dfd54d176b790a48e64c979f43d669eee1fc5fd67ad3" Jan 21 15:50:27 crc kubenswrapper[4760]: E0121 15:50:27.389677 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-x5x5q" podUID="4d5712fb-d149-4923-bd66-7ec385c7508d" Jan 21 15:50:27 crc kubenswrapper[4760]: E0121 15:50:27.397997 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-6xd6s" podUID="0416bf01-ef39-4a1b-b8ca-8e02ea2882ac" Jan 21 15:50:27 crc kubenswrapper[4760]: E0121 15:50:27.398024 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-hckt4" podUID="f73bc16d-d078-43de-a21d-f79b9529f2dc" Jan 21 15:50:27 crc kubenswrapper[4760]: I0121 15:50:27.737160 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-99594499c-294mp"] Jan 21 15:50:27 crc kubenswrapper[4760]: W0121 15:50:27.745908 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01e67c05_5dcd_4dc4_bd57_177c8b1fc2bf.slice/crio-7294d2ef9ec6519cfe44d8c60a3393d59e28acbaa2ba26faa593acd9ab082d12 WatchSource:0}: Error finding container 7294d2ef9ec6519cfe44d8c60a3393d59e28acbaa2ba26faa593acd9ab082d12: Status 404 returned error can't find the container with id 
7294d2ef9ec6519cfe44d8c60a3393d59e28acbaa2ba26faa593acd9ab082d12 Jan 21 15:50:27 crc kubenswrapper[4760]: I0121 15:50:27.767446 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-99594499c-294mp" event={"ID":"01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf","Type":"ContainerStarted","Data":"7294d2ef9ec6519cfe44d8c60a3393d59e28acbaa2ba26faa593acd9ab082d12"} Jan 21 15:50:27 crc kubenswrapper[4760]: I0121 15:50:27.768653 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bbr8l" event={"ID":"0a4b6476-7a89-41b4-b918-5628f622c7c1","Type":"ContainerStarted","Data":"572b43d8e1ddd7e45e0fcc3500470e82a7176c150ce0a373e3cfa95b1c0d41d2"} Jan 21 15:50:27 crc kubenswrapper[4760]: E0121 15:50:27.772692 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-6ndfg" podUID="eae3b2cf-b59a-4ff2-801e-e6a6be3692dc" Jan 21 15:50:27 crc kubenswrapper[4760]: I0121 15:50:27.856117 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 21 15:50:27 crc kubenswrapper[4760]: I0121 15:50:27.866966 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75fdddbbb-hms6q"] Jan 21 15:50:27 crc kubenswrapper[4760]: I0121 15:50:27.872188 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 21 15:50:28 crc kubenswrapper[4760]: I0121 15:50:28.777219 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75fdddbbb-hms6q" 
event={"ID":"1a0be348-0efb-43ad-812e-da614a51704b","Type":"ContainerStarted","Data":"583defa460c7acc5e85d4d14ce60f028b35e910eff146d63b496377f7bb34a68"} Jan 21 15:50:28 crc kubenswrapper[4760]: I0121 15:50:28.779227 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"430b8562-5701-4889-bf8f-71ddef9325b0","Type":"ContainerStarted","Data":"11787bcccbc8568a38fbd1c4f817ef453e9c0f9b11aa51b3a63e6d984418f3b9"} Jan 21 15:50:28 crc kubenswrapper[4760]: I0121 15:50:28.781224 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"50ff8c4c-86ac-4abe-9dbc-69a277a3e34c","Type":"ContainerStarted","Data":"b3bcebe743bb41d77d81c981ad9a7d61095377e25748706cd350e26e644412da"} Jan 21 15:50:29 crc kubenswrapper[4760]: E0121 15:50:29.477495 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 21 15:50:29 crc kubenswrapper[4760]: E0121 15:50:29.477918 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ff6j9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-bcf5p_openshift-marketplace(ddcb6012-213a-4989-8cb3-60fc763a8255): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 15:50:29 crc kubenswrapper[4760]: E0121 15:50:29.479431 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-bcf5p" podUID="ddcb6012-213a-4989-8cb3-60fc763a8255" Jan 21 15:50:29 crc 
kubenswrapper[4760]: E0121 15:50:29.600548 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 21 15:50:29 crc kubenswrapper[4760]: E0121 15:50:29.600722 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zbf9j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-5nbxl_openshift-marketplace(b4c156f4-f6be-46db-a27b-59da59600e26): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 15:50:29 crc kubenswrapper[4760]: E0121 15:50:29.602115 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-5nbxl" podUID="b4c156f4-f6be-46db-a27b-59da59600e26" Jan 21 15:50:29 crc kubenswrapper[4760]: I0121 15:50:29.793121 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75fdddbbb-hms6q" event={"ID":"1a0be348-0efb-43ad-812e-da614a51704b","Type":"ContainerStarted","Data":"9a5b9988d65f0fed40c684cdec307fa35fc45a51f9ac96c548d7643e7eab4ebc"} Jan 21 15:50:29 crc kubenswrapper[4760]: I0121 15:50:29.793522 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-75fdddbbb-hms6q" Jan 21 15:50:29 crc kubenswrapper[4760]: I0121 15:50:29.802011 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"430b8562-5701-4889-bf8f-71ddef9325b0","Type":"ContainerStarted","Data":"538f4e14c5c439a69f565cbbfdf51a679d826b8180a462bb91225371feef8312"} Jan 21 15:50:29 crc kubenswrapper[4760]: I0121 15:50:29.815417 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-75fdddbbb-hms6q" Jan 21 15:50:29 crc kubenswrapper[4760]: I0121 15:50:29.819716 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bbr8l" 
event={"ID":"0a4b6476-7a89-41b4-b918-5628f622c7c1","Type":"ContainerStarted","Data":"d018438e0ed56c5463213f4faef19885edb10342c4428c2f4ccda7e444a3b6df"} Jan 21 15:50:29 crc kubenswrapper[4760]: I0121 15:50:29.826567 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-75fdddbbb-hms6q" podStartSLOduration=24.826540596 podStartE2EDuration="24.826540596s" podCreationTimestamp="2026-01-21 15:50:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:50:29.824538665 +0000 UTC m=+200.492308253" watchObservedRunningTime="2026-01-21 15:50:29.826540596 +0000 UTC m=+200.494310184" Jan 21 15:50:29 crc kubenswrapper[4760]: I0121 15:50:29.835832 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"50ff8c4c-86ac-4abe-9dbc-69a277a3e34c","Type":"ContainerStarted","Data":"bfd65a2234626fed5c5b5803d31eca0388725750f4abbbcb9b5e4afe94a4f356"} Jan 21 15:50:29 crc kubenswrapper[4760]: I0121 15:50:29.839208 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-99594499c-294mp" podUID="01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf" containerName="controller-manager" containerID="cri-o://3201c61b4bfe263126323ccee134bbf3b929e83b495fa648bc50fb416b5d3cc3" gracePeriod=30 Jan 21 15:50:29 crc kubenswrapper[4760]: I0121 15:50:29.839534 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-99594499c-294mp" event={"ID":"01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf","Type":"ContainerStarted","Data":"3201c61b4bfe263126323ccee134bbf3b929e83b495fa648bc50fb416b5d3cc3"} Jan 21 15:50:29 crc kubenswrapper[4760]: I0121 15:50:29.840193 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-controller-manager/controller-manager-99594499c-294mp" Jan 21 15:50:29 crc kubenswrapper[4760]: E0121 15:50:29.840308 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-bcf5p" podUID="ddcb6012-213a-4989-8cb3-60fc763a8255" Jan 21 15:50:29 crc kubenswrapper[4760]: E0121 15:50:29.840687 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-5nbxl" podUID="b4c156f4-f6be-46db-a27b-59da59600e26" Jan 21 15:50:29 crc kubenswrapper[4760]: I0121 15:50:29.846609 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-99594499c-294mp" Jan 21 15:50:29 crc kubenswrapper[4760]: E0121 15:50:29.897025 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 21 15:50:29 crc kubenswrapper[4760]: E0121 15:50:29.897531 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mgkpx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-26f8s_openshift-marketplace(f08c19d6-0704-4562-8e0b-aa1d20161f70): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 15:50:29 crc kubenswrapper[4760]: E0121 15:50:29.899376 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-26f8s" podUID="f08c19d6-0704-4562-8e0b-aa1d20161f70" Jan 21 15:50:29 crc 
kubenswrapper[4760]: I0121 15:50:29.931797 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=10.931776957 podStartE2EDuration="10.931776957s" podCreationTimestamp="2026-01-21 15:50:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:50:29.928427651 +0000 UTC m=+200.596197239" watchObservedRunningTime="2026-01-21 15:50:29.931776957 +0000 UTC m=+200.599546525" Jan 21 15:50:29 crc kubenswrapper[4760]: E0121 15:50:29.965199 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 21 15:50:29 crc kubenswrapper[4760]: E0121 15:50:29.965393 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hbvgr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-p6nql_openshift-marketplace(ba544d41-3795-476a-ba4e-b9f4dcf8bb5f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 15:50:29 crc kubenswrapper[4760]: E0121 15:50:29.966632 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-p6nql" podUID="ba544d41-3795-476a-ba4e-b9f4dcf8bb5f" Jan 21 15:50:29 crc 
kubenswrapper[4760]: I0121 15:50:29.996225 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=16.996205981 podStartE2EDuration="16.996205981s" podCreationTimestamp="2026-01-21 15:50:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:50:29.9930535 +0000 UTC m=+200.660823078" watchObservedRunningTime="2026-01-21 15:50:29.996205981 +0000 UTC m=+200.663975559" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.012389 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-99594499c-294mp" podStartSLOduration=45.012368366 podStartE2EDuration="45.012368366s" podCreationTimestamp="2026-01-21 15:49:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:50:30.010247221 +0000 UTC m=+200.678016829" watchObservedRunningTime="2026-01-21 15:50:30.012368366 +0000 UTC m=+200.680137944" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.207877 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-99594499c-294mp" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.234925 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-56d695b7b9-28gwl"] Jan 21 15:50:30 crc kubenswrapper[4760]: E0121 15:50:30.235188 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf" containerName="controller-manager" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.235206 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf" containerName="controller-manager" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.235388 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf" containerName="controller-manager" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.235901 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-56d695b7b9-28gwl" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.244103 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-56d695b7b9-28gwl"] Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.261383 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf-proxy-ca-bundles\") pod \"01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf\" (UID: \"01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf\") " Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.261487 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzb5v\" (UniqueName: \"kubernetes.io/projected/01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf-kube-api-access-jzb5v\") pod \"01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf\" (UID: \"01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf\") " Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.261530 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf-config\") pod \"01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf\" (UID: \"01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf\") " Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.261553 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf-client-ca\") pod \"01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf\" (UID: \"01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf\") " Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.261626 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf-serving-cert\") pod \"01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf\" (UID: 
\"01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf\") " Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.262288 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf" (UID: "01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.262310 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf-client-ca" (OuterVolumeSpecName: "client-ca") pod "01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf" (UID: "01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.262443 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf-config" (OuterVolumeSpecName: "config") pod "01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf" (UID: "01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.267856 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf" (UID: "01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.267977 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf-kube-api-access-jzb5v" (OuterVolumeSpecName: "kube-api-access-jzb5v") pod "01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf" (UID: "01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf"). InnerVolumeSpecName "kube-api-access-jzb5v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.362957 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cf940287-2e74-4026-87fa-33ff29056899-client-ca\") pod \"controller-manager-56d695b7b9-28gwl\" (UID: \"cf940287-2e74-4026-87fa-33ff29056899\") " pod="openshift-controller-manager/controller-manager-56d695b7b9-28gwl" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.363004 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cf940287-2e74-4026-87fa-33ff29056899-proxy-ca-bundles\") pod \"controller-manager-56d695b7b9-28gwl\" (UID: \"cf940287-2e74-4026-87fa-33ff29056899\") " pod="openshift-controller-manager/controller-manager-56d695b7b9-28gwl" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.363055 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf940287-2e74-4026-87fa-33ff29056899-config\") pod \"controller-manager-56d695b7b9-28gwl\" (UID: \"cf940287-2e74-4026-87fa-33ff29056899\") " pod="openshift-controller-manager/controller-manager-56d695b7b9-28gwl" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.363094 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-pfn4v\" (UniqueName: \"kubernetes.io/projected/cf940287-2e74-4026-87fa-33ff29056899-kube-api-access-pfn4v\") pod \"controller-manager-56d695b7b9-28gwl\" (UID: \"cf940287-2e74-4026-87fa-33ff29056899\") " pod="openshift-controller-manager/controller-manager-56d695b7b9-28gwl" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.363116 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf940287-2e74-4026-87fa-33ff29056899-serving-cert\") pod \"controller-manager-56d695b7b9-28gwl\" (UID: \"cf940287-2e74-4026-87fa-33ff29056899\") " pod="openshift-controller-manager/controller-manager-56d695b7b9-28gwl" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.363161 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.363172 4760 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.363182 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jzb5v\" (UniqueName: \"kubernetes.io/projected/01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf-kube-api-access-jzb5v\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.363190 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.363198 4760 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.465080 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf940287-2e74-4026-87fa-33ff29056899-config\") pod \"controller-manager-56d695b7b9-28gwl\" (UID: \"cf940287-2e74-4026-87fa-33ff29056899\") " pod="openshift-controller-manager/controller-manager-56d695b7b9-28gwl" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.465147 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfn4v\" (UniqueName: \"kubernetes.io/projected/cf940287-2e74-4026-87fa-33ff29056899-kube-api-access-pfn4v\") pod \"controller-manager-56d695b7b9-28gwl\" (UID: \"cf940287-2e74-4026-87fa-33ff29056899\") " pod="openshift-controller-manager/controller-manager-56d695b7b9-28gwl" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.465177 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf940287-2e74-4026-87fa-33ff29056899-serving-cert\") pod \"controller-manager-56d695b7b9-28gwl\" (UID: \"cf940287-2e74-4026-87fa-33ff29056899\") " pod="openshift-controller-manager/controller-manager-56d695b7b9-28gwl" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.465241 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cf940287-2e74-4026-87fa-33ff29056899-client-ca\") pod \"controller-manager-56d695b7b9-28gwl\" (UID: \"cf940287-2e74-4026-87fa-33ff29056899\") " pod="openshift-controller-manager/controller-manager-56d695b7b9-28gwl" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.465262 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/cf940287-2e74-4026-87fa-33ff29056899-proxy-ca-bundles\") pod \"controller-manager-56d695b7b9-28gwl\" (UID: \"cf940287-2e74-4026-87fa-33ff29056899\") " pod="openshift-controller-manager/controller-manager-56d695b7b9-28gwl" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.466786 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cf940287-2e74-4026-87fa-33ff29056899-proxy-ca-bundles\") pod \"controller-manager-56d695b7b9-28gwl\" (UID: \"cf940287-2e74-4026-87fa-33ff29056899\") " pod="openshift-controller-manager/controller-manager-56d695b7b9-28gwl" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.470082 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf940287-2e74-4026-87fa-33ff29056899-config\") pod \"controller-manager-56d695b7b9-28gwl\" (UID: \"cf940287-2e74-4026-87fa-33ff29056899\") " pod="openshift-controller-manager/controller-manager-56d695b7b9-28gwl" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.472434 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cf940287-2e74-4026-87fa-33ff29056899-client-ca\") pod \"controller-manager-56d695b7b9-28gwl\" (UID: \"cf940287-2e74-4026-87fa-33ff29056899\") " pod="openshift-controller-manager/controller-manager-56d695b7b9-28gwl" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.473088 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf940287-2e74-4026-87fa-33ff29056899-serving-cert\") pod \"controller-manager-56d695b7b9-28gwl\" (UID: \"cf940287-2e74-4026-87fa-33ff29056899\") " pod="openshift-controller-manager/controller-manager-56d695b7b9-28gwl" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.487113 4760 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-pfn4v\" (UniqueName: \"kubernetes.io/projected/cf940287-2e74-4026-87fa-33ff29056899-kube-api-access-pfn4v\") pod \"controller-manager-56d695b7b9-28gwl\" (UID: \"cf940287-2e74-4026-87fa-33ff29056899\") " pod="openshift-controller-manager/controller-manager-56d695b7b9-28gwl" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.592752 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-56d695b7b9-28gwl" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.788671 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-56d695b7b9-28gwl"] Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.847208 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bbr8l" event={"ID":"0a4b6476-7a89-41b4-b918-5628f622c7c1","Type":"ContainerStarted","Data":"05fa1c99bf6a5eef7ed8ee519d4ba5082f44ce077f1db95a556465ccedb8df6b"} Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.848553 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56d695b7b9-28gwl" event={"ID":"cf940287-2e74-4026-87fa-33ff29056899","Type":"ContainerStarted","Data":"8dba43c5c93d0c3bd3a4831af727dbf207bd86f1b155a49887596646532a3a0e"} Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.850901 4760 generic.go:334] "Generic (PLEG): container finished" podID="50ff8c4c-86ac-4abe-9dbc-69a277a3e34c" containerID="bfd65a2234626fed5c5b5803d31eca0388725750f4abbbcb9b5e4afe94a4f356" exitCode=0 Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.850966 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"50ff8c4c-86ac-4abe-9dbc-69a277a3e34c","Type":"ContainerDied","Data":"bfd65a2234626fed5c5b5803d31eca0388725750f4abbbcb9b5e4afe94a4f356"} Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 
15:50:30.852889 4760 generic.go:334] "Generic (PLEG): container finished" podID="01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf" containerID="3201c61b4bfe263126323ccee134bbf3b929e83b495fa648bc50fb416b5d3cc3" exitCode=0 Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.853707 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-99594499c-294mp" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.854678 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-99594499c-294mp" event={"ID":"01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf","Type":"ContainerDied","Data":"3201c61b4bfe263126323ccee134bbf3b929e83b495fa648bc50fb416b5d3cc3"} Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.854728 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-99594499c-294mp" event={"ID":"01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf","Type":"ContainerDied","Data":"7294d2ef9ec6519cfe44d8c60a3393d59e28acbaa2ba26faa593acd9ab082d12"} Jan 21 15:50:30 crc kubenswrapper[4760]: E0121 15:50:30.854708 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-p6nql" podUID="ba544d41-3795-476a-ba4e-b9f4dcf8bb5f" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.854771 4760 scope.go:117] "RemoveContainer" containerID="3201c61b4bfe263126323ccee134bbf3b929e83b495fa648bc50fb416b5d3cc3" Jan 21 15:50:30 crc kubenswrapper[4760]: E0121 15:50:30.856795 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/redhat-operators-26f8s" podUID="f08c19d6-0704-4562-8e0b-aa1d20161f70" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.870065 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-bbr8l" podStartSLOduration=182.87003262 podStartE2EDuration="3m2.87003262s" podCreationTimestamp="2026-01-21 15:47:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:50:30.864707953 +0000 UTC m=+201.532477531" watchObservedRunningTime="2026-01-21 15:50:30.87003262 +0000 UTC m=+201.537802198" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.878862 4760 scope.go:117] "RemoveContainer" containerID="3201c61b4bfe263126323ccee134bbf3b929e83b495fa648bc50fb416b5d3cc3" Jan 21 15:50:30 crc kubenswrapper[4760]: E0121 15:50:30.880520 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3201c61b4bfe263126323ccee134bbf3b929e83b495fa648bc50fb416b5d3cc3\": container with ID starting with 3201c61b4bfe263126323ccee134bbf3b929e83b495fa648bc50fb416b5d3cc3 not found: ID does not exist" containerID="3201c61b4bfe263126323ccee134bbf3b929e83b495fa648bc50fb416b5d3cc3" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.880582 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3201c61b4bfe263126323ccee134bbf3b929e83b495fa648bc50fb416b5d3cc3"} err="failed to get container status \"3201c61b4bfe263126323ccee134bbf3b929e83b495fa648bc50fb416b5d3cc3\": rpc error: code = NotFound desc = could not find container \"3201c61b4bfe263126323ccee134bbf3b929e83b495fa648bc50fb416b5d3cc3\": container with ID starting with 3201c61b4bfe263126323ccee134bbf3b929e83b495fa648bc50fb416b5d3cc3 not found: ID does not exist" Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.938299 4760 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-99594499c-294mp"] Jan 21 15:50:30 crc kubenswrapper[4760]: I0121 15:50:30.951967 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-99594499c-294mp"] Jan 21 15:50:31 crc kubenswrapper[4760]: I0121 15:50:31.632207 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf" path="/var/lib/kubelet/pods/01e67c05-5dcd-4dc4-bd57-177c8b1fc2bf/volumes" Jan 21 15:50:31 crc kubenswrapper[4760]: I0121 15:50:31.863164 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56d695b7b9-28gwl" event={"ID":"cf940287-2e74-4026-87fa-33ff29056899","Type":"ContainerStarted","Data":"d67b95ffe112d4d205818b36d27374fc94e60dcc650e6c837c0c3d2c153c5bb7"} Jan 21 15:50:31 crc kubenswrapper[4760]: I0121 15:50:31.866552 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-56d695b7b9-28gwl" Jan 21 15:50:31 crc kubenswrapper[4760]: I0121 15:50:31.870121 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-56d695b7b9-28gwl" Jan 21 15:50:31 crc kubenswrapper[4760]: I0121 15:50:31.884102 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-56d695b7b9-28gwl" podStartSLOduration=26.884081019 podStartE2EDuration="26.884081019s" podCreationTimestamp="2026-01-21 15:50:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:50:31.880972109 +0000 UTC m=+202.548741717" watchObservedRunningTime="2026-01-21 15:50:31.884081019 +0000 UTC m=+202.551850597" Jan 21 15:50:32 crc kubenswrapper[4760]: I0121 15:50:32.111843 4760 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 15:50:32 crc kubenswrapper[4760]: I0121 15:50:32.191214 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/50ff8c4c-86ac-4abe-9dbc-69a277a3e34c-kubelet-dir\") pod \"50ff8c4c-86ac-4abe-9dbc-69a277a3e34c\" (UID: \"50ff8c4c-86ac-4abe-9dbc-69a277a3e34c\") " Jan 21 15:50:32 crc kubenswrapper[4760]: I0121 15:50:32.191316 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50ff8c4c-86ac-4abe-9dbc-69a277a3e34c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "50ff8c4c-86ac-4abe-9dbc-69a277a3e34c" (UID: "50ff8c4c-86ac-4abe-9dbc-69a277a3e34c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:50:32 crc kubenswrapper[4760]: I0121 15:50:32.191452 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/50ff8c4c-86ac-4abe-9dbc-69a277a3e34c-kube-api-access\") pod \"50ff8c4c-86ac-4abe-9dbc-69a277a3e34c\" (UID: \"50ff8c4c-86ac-4abe-9dbc-69a277a3e34c\") " Jan 21 15:50:32 crc kubenswrapper[4760]: I0121 15:50:32.191752 4760 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/50ff8c4c-86ac-4abe-9dbc-69a277a3e34c-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:32 crc kubenswrapper[4760]: I0121 15:50:32.197118 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50ff8c4c-86ac-4abe-9dbc-69a277a3e34c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "50ff8c4c-86ac-4abe-9dbc-69a277a3e34c" (UID: "50ff8c4c-86ac-4abe-9dbc-69a277a3e34c"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:50:32 crc kubenswrapper[4760]: I0121 15:50:32.293921 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/50ff8c4c-86ac-4abe-9dbc-69a277a3e34c-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:32 crc kubenswrapper[4760]: I0121 15:50:32.869207 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 15:50:32 crc kubenswrapper[4760]: I0121 15:50:32.870853 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"50ff8c4c-86ac-4abe-9dbc-69a277a3e34c","Type":"ContainerDied","Data":"b3bcebe743bb41d77d81c981ad9a7d61095377e25748706cd350e26e644412da"} Jan 21 15:50:32 crc kubenswrapper[4760]: I0121 15:50:32.870932 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3bcebe743bb41d77d81c981ad9a7d61095377e25748706cd350e26e644412da" Jan 21 15:50:39 crc kubenswrapper[4760]: I0121 15:50:39.909274 4760 generic.go:334] "Generic (PLEG): container finished" podID="eae3b2cf-b59a-4ff2-801e-e6a6be3692dc" containerID="ac9f293c7f8e0b0eb98c0c533b04b14ae706a9320a5259557538a6b36412667e" exitCode=0 Jan 21 15:50:39 crc kubenswrapper[4760]: I0121 15:50:39.909371 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6ndfg" event={"ID":"eae3b2cf-b59a-4ff2-801e-e6a6be3692dc","Type":"ContainerDied","Data":"ac9f293c7f8e0b0eb98c0c533b04b14ae706a9320a5259557538a6b36412667e"} Jan 21 15:50:40 crc kubenswrapper[4760]: I0121 15:50:40.918129 4760 generic.go:334] "Generic (PLEG): container finished" podID="4d5712fb-d149-4923-bd66-7ec385c7508d" containerID="feb9f59c9bf1804a69669d8bb543743e133ae9729c77c2d5bc787139ceb12cfb" exitCode=0 Jan 21 15:50:40 crc kubenswrapper[4760]: I0121 15:50:40.918214 4760 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-x5x5q" event={"ID":"4d5712fb-d149-4923-bd66-7ec385c7508d","Type":"ContainerDied","Data":"feb9f59c9bf1804a69669d8bb543743e133ae9729c77c2d5bc787139ceb12cfb"} Jan 21 15:50:40 crc kubenswrapper[4760]: I0121 15:50:40.924823 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6ndfg" event={"ID":"eae3b2cf-b59a-4ff2-801e-e6a6be3692dc","Type":"ContainerStarted","Data":"2d5efab855a3f6262f9129304876642f0063917251365c70576c77fa23d449b5"} Jan 21 15:50:40 crc kubenswrapper[4760]: I0121 15:50:40.963487 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6ndfg" podStartSLOduration=3.788022493 podStartE2EDuration="1m6.963465274s" podCreationTimestamp="2026-01-21 15:49:34 +0000 UTC" firstStartedPulling="2026-01-21 15:49:37.119822123 +0000 UTC m=+147.787591691" lastFinishedPulling="2026-01-21 15:50:40.295264894 +0000 UTC m=+210.963034472" observedRunningTime="2026-01-21 15:50:40.957981123 +0000 UTC m=+211.625750701" watchObservedRunningTime="2026-01-21 15:50:40.963465274 +0000 UTC m=+211.631234852" Jan 21 15:50:42 crc kubenswrapper[4760]: I0121 15:50:42.937092 4760 generic.go:334] "Generic (PLEG): container finished" podID="0416bf01-ef39-4a1b-b8ca-8e02ea2882ac" containerID="a10aa316b83a3f9c2d25d428da60f4f8dc9a314c4b9b7112ace20ddbfd8e0575" exitCode=0 Jan 21 15:50:42 crc kubenswrapper[4760]: I0121 15:50:42.937160 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6xd6s" event={"ID":"0416bf01-ef39-4a1b-b8ca-8e02ea2882ac","Type":"ContainerDied","Data":"a10aa316b83a3f9c2d25d428da60f4f8dc9a314c4b9b7112ace20ddbfd8e0575"} Jan 21 15:50:42 crc kubenswrapper[4760]: I0121 15:50:42.940662 4760 generic.go:334] "Generic (PLEG): container finished" podID="f73bc16d-d078-43de-a21d-f79b9529f2dc" containerID="81fe443766167fda35b8c4567b9863b83448560bf913712358dc824f0e37e5eb" 
exitCode=0 Jan 21 15:50:42 crc kubenswrapper[4760]: I0121 15:50:42.940732 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hckt4" event={"ID":"f73bc16d-d078-43de-a21d-f79b9529f2dc","Type":"ContainerDied","Data":"81fe443766167fda35b8c4567b9863b83448560bf913712358dc824f0e37e5eb"} Jan 21 15:50:42 crc kubenswrapper[4760]: I0121 15:50:42.945339 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x5x5q" event={"ID":"4d5712fb-d149-4923-bd66-7ec385c7508d","Type":"ContainerStarted","Data":"190d4a44f2824b8d540470d9a160a5e6e060bb64146ab6ea011983d908fc8d64"} Jan 21 15:50:42 crc kubenswrapper[4760]: I0121 15:50:42.952375 4760 generic.go:334] "Generic (PLEG): container finished" podID="b4c156f4-f6be-46db-a27b-59da59600e26" containerID="fdc6b5906b8f47c6da41a02a991bb86865d318d619b9ba0965311a536858f30f" exitCode=0 Jan 21 15:50:42 crc kubenswrapper[4760]: I0121 15:50:42.952441 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5nbxl" event={"ID":"b4c156f4-f6be-46db-a27b-59da59600e26","Type":"ContainerDied","Data":"fdc6b5906b8f47c6da41a02a991bb86865d318d619b9ba0965311a536858f30f"} Jan 21 15:50:43 crc kubenswrapper[4760]: I0121 15:50:43.029431 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-x5x5q" podStartSLOduration=2.8372066240000002 podStartE2EDuration="1m11.029411852s" podCreationTimestamp="2026-01-21 15:49:32 +0000 UTC" firstStartedPulling="2026-01-21 15:49:33.967099901 +0000 UTC m=+144.634869479" lastFinishedPulling="2026-01-21 15:50:42.159305129 +0000 UTC m=+212.827074707" observedRunningTime="2026-01-21 15:50:43.025588084 +0000 UTC m=+213.693357672" watchObservedRunningTime="2026-01-21 15:50:43.029411852 +0000 UTC m=+213.697181430" Jan 21 15:50:43 crc kubenswrapper[4760]: I0121 15:50:43.959550 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-5nbxl" event={"ID":"b4c156f4-f6be-46db-a27b-59da59600e26","Type":"ContainerStarted","Data":"8b104007e35e5b57b8d765bfa98fe9732c9446e7e2bdd1c4cc4fa1e78b507e1f"} Jan 21 15:50:43 crc kubenswrapper[4760]: I0121 15:50:43.962852 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6xd6s" event={"ID":"0416bf01-ef39-4a1b-b8ca-8e02ea2882ac","Type":"ContainerStarted","Data":"bf7ccf1949d6df2f35ff187b38571da037f648e8e47ed74151685cbd4b8ac711"} Jan 21 15:50:43 crc kubenswrapper[4760]: I0121 15:50:43.967097 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hckt4" event={"ID":"f73bc16d-d078-43de-a21d-f79b9529f2dc","Type":"ContainerStarted","Data":"6549dd25316b3c68e02382874063bdbcd8175a0fd9dbc1d9decaba6787417c7c"} Jan 21 15:50:43 crc kubenswrapper[4760]: I0121 15:50:43.984036 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5nbxl" podStartSLOduration=3.5713530159999998 podStartE2EDuration="1m11.984012024s" podCreationTimestamp="2026-01-21 15:49:32 +0000 UTC" firstStartedPulling="2026-01-21 15:49:34.998547014 +0000 UTC m=+145.666316582" lastFinishedPulling="2026-01-21 15:50:43.411206012 +0000 UTC m=+214.078975590" observedRunningTime="2026-01-21 15:50:43.981675324 +0000 UTC m=+214.649444912" watchObservedRunningTime="2026-01-21 15:50:43.984012024 +0000 UTC m=+214.651781602" Jan 21 15:50:44 crc kubenswrapper[4760]: I0121 15:50:44.036454 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6xd6s" podStartSLOduration=2.675395418 podStartE2EDuration="1m11.036433859s" podCreationTimestamp="2026-01-21 15:49:33 +0000 UTC" firstStartedPulling="2026-01-21 15:49:35.013193092 +0000 UTC m=+145.680962670" lastFinishedPulling="2026-01-21 15:50:43.374231533 +0000 UTC m=+214.042001111" observedRunningTime="2026-01-21 
15:50:44.015062811 +0000 UTC m=+214.682832389" watchObservedRunningTime="2026-01-21 15:50:44.036433859 +0000 UTC m=+214.704203437" Jan 21 15:50:44 crc kubenswrapper[4760]: I0121 15:50:44.036826 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hckt4" podStartSLOduration=3.902262834 podStartE2EDuration="1m9.036820779s" podCreationTimestamp="2026-01-21 15:49:35 +0000 UTC" firstStartedPulling="2026-01-21 15:49:38.311868981 +0000 UTC m=+148.979638559" lastFinishedPulling="2026-01-21 15:50:43.446426926 +0000 UTC m=+214.114196504" observedRunningTime="2026-01-21 15:50:44.035910266 +0000 UTC m=+214.703679844" watchObservedRunningTime="2026-01-21 15:50:44.036820779 +0000 UTC m=+214.704590357" Jan 21 15:50:44 crc kubenswrapper[4760]: I0121 15:50:44.201578 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6xd6s" Jan 21 15:50:44 crc kubenswrapper[4760]: I0121 15:50:44.201656 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6xd6s" Jan 21 15:50:44 crc kubenswrapper[4760]: I0121 15:50:44.647532 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6ndfg" Jan 21 15:50:44 crc kubenswrapper[4760]: I0121 15:50:44.647601 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6ndfg" Jan 21 15:50:44 crc kubenswrapper[4760]: I0121 15:50:44.697619 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6ndfg" Jan 21 15:50:44 crc kubenswrapper[4760]: I0121 15:50:44.978833 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-26f8s" 
event={"ID":"f08c19d6-0704-4562-8e0b-aa1d20161f70","Type":"ContainerStarted","Data":"34c0e91fa589c98af563e350825bf2916ca8107e682048309cf4cfb27dbe7ca9"} Jan 21 15:50:44 crc kubenswrapper[4760]: I0121 15:50:44.981545 4760 generic.go:334] "Generic (PLEG): container finished" podID="ba544d41-3795-476a-ba4e-b9f4dcf8bb5f" containerID="633996cf50a456325703c67cb22ee42dd93c0f4af97d123ece106067febb7014" exitCode=0 Jan 21 15:50:44 crc kubenswrapper[4760]: I0121 15:50:44.981596 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p6nql" event={"ID":"ba544d41-3795-476a-ba4e-b9f4dcf8bb5f","Type":"ContainerDied","Data":"633996cf50a456325703c67cb22ee42dd93c0f4af97d123ece106067febb7014"} Jan 21 15:50:45 crc kubenswrapper[4760]: I0121 15:50:45.298381 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-6xd6s" podUID="0416bf01-ef39-4a1b-b8ca-8e02ea2882ac" containerName="registry-server" probeResult="failure" output=< Jan 21 15:50:45 crc kubenswrapper[4760]: timeout: failed to connect service ":50051" within 1s Jan 21 15:50:45 crc kubenswrapper[4760]: > Jan 21 15:50:45 crc kubenswrapper[4760]: I0121 15:50:45.597597 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hckt4" Jan 21 15:50:45 crc kubenswrapper[4760]: I0121 15:50:45.597656 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hckt4" Jan 21 15:50:45 crc kubenswrapper[4760]: I0121 15:50:45.990773 4760 generic.go:334] "Generic (PLEG): container finished" podID="f08c19d6-0704-4562-8e0b-aa1d20161f70" containerID="34c0e91fa589c98af563e350825bf2916ca8107e682048309cf4cfb27dbe7ca9" exitCode=0 Jan 21 15:50:45 crc kubenswrapper[4760]: I0121 15:50:45.990886 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-26f8s" 
event={"ID":"f08c19d6-0704-4562-8e0b-aa1d20161f70","Type":"ContainerDied","Data":"34c0e91fa589c98af563e350825bf2916ca8107e682048309cf4cfb27dbe7ca9"} Jan 21 15:50:46 crc kubenswrapper[4760]: I0121 15:50:46.639938 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hckt4" podUID="f73bc16d-d078-43de-a21d-f79b9529f2dc" containerName="registry-server" probeResult="failure" output=< Jan 21 15:50:46 crc kubenswrapper[4760]: timeout: failed to connect service ":50051" within 1s Jan 21 15:50:46 crc kubenswrapper[4760]: > Jan 21 15:50:46 crc kubenswrapper[4760]: I0121 15:50:46.998662 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bcf5p" event={"ID":"ddcb6012-213a-4989-8cb3-60fc763a8255","Type":"ContainerStarted","Data":"9168171ae827742ea642122d54e48757d6fe3f2ce307edbe293faba1ad8c6a19"} Jan 21 15:50:47 crc kubenswrapper[4760]: I0121 15:50:47.000969 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p6nql" event={"ID":"ba544d41-3795-476a-ba4e-b9f4dcf8bb5f","Type":"ContainerStarted","Data":"315fb15aa88b36985791a4c694e27c7695ebf21512be344f857b5e9950bfcef3"} Jan 21 15:50:49 crc kubenswrapper[4760]: I0121 15:50:49.013308 4760 generic.go:334] "Generic (PLEG): container finished" podID="ddcb6012-213a-4989-8cb3-60fc763a8255" containerID="9168171ae827742ea642122d54e48757d6fe3f2ce307edbe293faba1ad8c6a19" exitCode=0 Jan 21 15:50:49 crc kubenswrapper[4760]: I0121 15:50:49.013473 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bcf5p" event={"ID":"ddcb6012-213a-4989-8cb3-60fc763a8255","Type":"ContainerDied","Data":"9168171ae827742ea642122d54e48757d6fe3f2ce307edbe293faba1ad8c6a19"} Jan 21 15:50:49 crc kubenswrapper[4760]: I0121 15:50:49.056725 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-p6nql" 
podStartSLOduration=5.940708092 podStartE2EDuration="1m18.056699589s" podCreationTimestamp="2026-01-21 15:49:31 +0000 UTC" firstStartedPulling="2026-01-21 15:49:33.944374962 +0000 UTC m=+144.612144540" lastFinishedPulling="2026-01-21 15:50:46.060366459 +0000 UTC m=+216.728136037" observedRunningTime="2026-01-21 15:50:49.051475585 +0000 UTC m=+219.719245163" watchObservedRunningTime="2026-01-21 15:50:49.056699589 +0000 UTC m=+219.724469167" Jan 21 15:50:50 crc kubenswrapper[4760]: I0121 15:50:50.947046 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:50:50 crc kubenswrapper[4760]: I0121 15:50:50.948391 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:50:50 crc kubenswrapper[4760]: I0121 15:50:50.948646 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" Jan 21 15:50:50 crc kubenswrapper[4760]: I0121 15:50:50.949759 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c4b0860c42e9db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99"} pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 15:50:50 crc kubenswrapper[4760]: I0121 15:50:50.950025 4760 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" containerID="cri-o://c4b0860c42e9db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99" gracePeriod=600 Jan 21 15:50:52 crc kubenswrapper[4760]: I0121 15:50:52.268551 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-p6nql" Jan 21 15:50:52 crc kubenswrapper[4760]: I0121 15:50:52.268915 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-p6nql" Jan 21 15:50:52 crc kubenswrapper[4760]: I0121 15:50:52.314684 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-p6nql" Jan 21 15:50:52 crc kubenswrapper[4760]: I0121 15:50:52.515690 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5nbxl" Jan 21 15:50:52 crc kubenswrapper[4760]: I0121 15:50:52.516105 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5nbxl" Jan 21 15:50:52 crc kubenswrapper[4760]: I0121 15:50:52.552660 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5nbxl" Jan 21 15:50:52 crc kubenswrapper[4760]: I0121 15:50:52.637474 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-x5x5q" Jan 21 15:50:52 crc kubenswrapper[4760]: I0121 15:50:52.638076 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-x5x5q" Jan 21 15:50:52 crc kubenswrapper[4760]: I0121 15:50:52.678794 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-x5x5q" Jan 21 15:50:53 crc 
kubenswrapper[4760]: I0121 15:50:53.080857 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-p6nql" Jan 21 15:50:53 crc kubenswrapper[4760]: I0121 15:50:53.382429 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5nbxl" Jan 21 15:50:53 crc kubenswrapper[4760]: I0121 15:50:53.382497 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-x5x5q" Jan 21 15:50:54 crc kubenswrapper[4760]: I0121 15:50:54.051269 4760 generic.go:334] "Generic (PLEG): container finished" podID="5dd365e7-570c-4130-a299-30e376624ce2" containerID="c4b0860c42e9db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99" exitCode=0 Jan 21 15:50:54 crc kubenswrapper[4760]: I0121 15:50:54.051400 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerDied","Data":"c4b0860c42e9db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99"} Jan 21 15:50:54 crc kubenswrapper[4760]: I0121 15:50:54.247698 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6xd6s" Jan 21 15:50:54 crc kubenswrapper[4760]: I0121 15:50:54.289536 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6xd6s" Jan 21 15:50:54 crc kubenswrapper[4760]: I0121 15:50:54.686917 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6ndfg" Jan 21 15:50:54 crc kubenswrapper[4760]: I0121 15:50:54.872126 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5nbxl"] Jan 21 15:50:55 crc kubenswrapper[4760]: I0121 15:50:55.069476 4760 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/certified-operators-x5x5q"] Jan 21 15:50:55 crc kubenswrapper[4760]: I0121 15:50:55.645918 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hckt4" Jan 21 15:50:55 crc kubenswrapper[4760]: I0121 15:50:55.682669 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hckt4" Jan 21 15:50:56 crc kubenswrapper[4760]: I0121 15:50:56.075826 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-x5x5q" podUID="4d5712fb-d149-4923-bd66-7ec385c7508d" containerName="registry-server" containerID="cri-o://190d4a44f2824b8d540470d9a160a5e6e060bb64146ab6ea011983d908fc8d64" gracePeriod=2 Jan 21 15:50:56 crc kubenswrapper[4760]: I0121 15:50:56.077136 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5nbxl" podUID="b4c156f4-f6be-46db-a27b-59da59600e26" containerName="registry-server" containerID="cri-o://8b104007e35e5b57b8d765bfa98fe9732c9446e7e2bdd1c4cc4fa1e78b507e1f" gracePeriod=2 Jan 21 15:50:57 crc kubenswrapper[4760]: I0121 15:50:57.091704 4760 generic.go:334] "Generic (PLEG): container finished" podID="4d5712fb-d149-4923-bd66-7ec385c7508d" containerID="190d4a44f2824b8d540470d9a160a5e6e060bb64146ab6ea011983d908fc8d64" exitCode=0 Jan 21 15:50:57 crc kubenswrapper[4760]: I0121 15:50:57.091802 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x5x5q" event={"ID":"4d5712fb-d149-4923-bd66-7ec385c7508d","Type":"ContainerDied","Data":"190d4a44f2824b8d540470d9a160a5e6e060bb64146ab6ea011983d908fc8d64"} Jan 21 15:50:57 crc kubenswrapper[4760]: I0121 15:50:57.267888 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6ndfg"] Jan 21 15:50:57 crc kubenswrapper[4760]: I0121 15:50:57.268391 
4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6ndfg" podUID="eae3b2cf-b59a-4ff2-801e-e6a6be3692dc" containerName="registry-server" containerID="cri-o://2d5efab855a3f6262f9129304876642f0063917251365c70576c77fa23d449b5" gracePeriod=2 Jan 21 15:50:58 crc kubenswrapper[4760]: I0121 15:50:58.100481 4760 generic.go:334] "Generic (PLEG): container finished" podID="b4c156f4-f6be-46db-a27b-59da59600e26" containerID="8b104007e35e5b57b8d765bfa98fe9732c9446e7e2bdd1c4cc4fa1e78b507e1f" exitCode=0 Jan 21 15:50:58 crc kubenswrapper[4760]: I0121 15:50:58.100535 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5nbxl" event={"ID":"b4c156f4-f6be-46db-a27b-59da59600e26","Type":"ContainerDied","Data":"8b104007e35e5b57b8d765bfa98fe9732c9446e7e2bdd1c4cc4fa1e78b507e1f"} Jan 21 15:50:59 crc kubenswrapper[4760]: I0121 15:50:59.286452 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5nbxl" Jan 21 15:50:59 crc kubenswrapper[4760]: I0121 15:50:59.397775 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4c156f4-f6be-46db-a27b-59da59600e26-catalog-content\") pod \"b4c156f4-f6be-46db-a27b-59da59600e26\" (UID: \"b4c156f4-f6be-46db-a27b-59da59600e26\") " Jan 21 15:50:59 crc kubenswrapper[4760]: I0121 15:50:59.397925 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbf9j\" (UniqueName: \"kubernetes.io/projected/b4c156f4-f6be-46db-a27b-59da59600e26-kube-api-access-zbf9j\") pod \"b4c156f4-f6be-46db-a27b-59da59600e26\" (UID: \"b4c156f4-f6be-46db-a27b-59da59600e26\") " Jan 21 15:50:59 crc kubenswrapper[4760]: I0121 15:50:59.397962 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4c156f4-f6be-46db-a27b-59da59600e26-utilities\") pod \"b4c156f4-f6be-46db-a27b-59da59600e26\" (UID: \"b4c156f4-f6be-46db-a27b-59da59600e26\") " Jan 21 15:50:59 crc kubenswrapper[4760]: I0121 15:50:59.403090 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4c156f4-f6be-46db-a27b-59da59600e26-utilities" (OuterVolumeSpecName: "utilities") pod "b4c156f4-f6be-46db-a27b-59da59600e26" (UID: "b4c156f4-f6be-46db-a27b-59da59600e26"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:50:59 crc kubenswrapper[4760]: I0121 15:50:59.405979 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4c156f4-f6be-46db-a27b-59da59600e26-kube-api-access-zbf9j" (OuterVolumeSpecName: "kube-api-access-zbf9j") pod "b4c156f4-f6be-46db-a27b-59da59600e26" (UID: "b4c156f4-f6be-46db-a27b-59da59600e26"). InnerVolumeSpecName "kube-api-access-zbf9j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:50:59 crc kubenswrapper[4760]: I0121 15:50:59.449684 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4c156f4-f6be-46db-a27b-59da59600e26-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b4c156f4-f6be-46db-a27b-59da59600e26" (UID: "b4c156f4-f6be-46db-a27b-59da59600e26"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:50:59 crc kubenswrapper[4760]: I0121 15:50:59.499347 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4c156f4-f6be-46db-a27b-59da59600e26-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:59 crc kubenswrapper[4760]: I0121 15:50:59.499386 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbf9j\" (UniqueName: \"kubernetes.io/projected/b4c156f4-f6be-46db-a27b-59da59600e26-kube-api-access-zbf9j\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:59 crc kubenswrapper[4760]: I0121 15:50:59.499399 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4c156f4-f6be-46db-a27b-59da59600e26-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:50:59 crc kubenswrapper[4760]: I0121 15:50:59.864708 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hckt4"] Jan 21 15:50:59 crc kubenswrapper[4760]: I0121 15:50:59.865072 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hckt4" podUID="f73bc16d-d078-43de-a21d-f79b9529f2dc" containerName="registry-server" containerID="cri-o://6549dd25316b3c68e02382874063bdbcd8175a0fd9dbc1d9decaba6787417c7c" gracePeriod=2 Jan 21 15:51:00 crc kubenswrapper[4760]: I0121 15:51:00.119972 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-5nbxl" event={"ID":"b4c156f4-f6be-46db-a27b-59da59600e26","Type":"ContainerDied","Data":"0b0a7331696e324346519fa26d2e7eaf67a45bf400da7b1769b77d9a40dd4ed9"} Jan 21 15:51:00 crc kubenswrapper[4760]: I0121 15:51:00.120410 4760 scope.go:117] "RemoveContainer" containerID="8b104007e35e5b57b8d765bfa98fe9732c9446e7e2bdd1c4cc4fa1e78b507e1f" Jan 21 15:51:00 crc kubenswrapper[4760]: I0121 15:51:00.120411 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5nbxl" Jan 21 15:51:00 crc kubenswrapper[4760]: I0121 15:51:00.122675 4760 generic.go:334] "Generic (PLEG): container finished" podID="eae3b2cf-b59a-4ff2-801e-e6a6be3692dc" containerID="2d5efab855a3f6262f9129304876642f0063917251365c70576c77fa23d449b5" exitCode=0 Jan 21 15:51:00 crc kubenswrapper[4760]: I0121 15:51:00.122863 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6ndfg" event={"ID":"eae3b2cf-b59a-4ff2-801e-e6a6be3692dc","Type":"ContainerDied","Data":"2d5efab855a3f6262f9129304876642f0063917251365c70576c77fa23d449b5"} Jan 21 15:51:00 crc kubenswrapper[4760]: I0121 15:51:00.139597 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5nbxl"] Jan 21 15:51:00 crc kubenswrapper[4760]: I0121 15:51:00.142315 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5nbxl"] Jan 21 15:51:01 crc kubenswrapper[4760]: I0121 15:51:01.631698 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4c156f4-f6be-46db-a27b-59da59600e26" path="/var/lib/kubelet/pods/b4c156f4-f6be-46db-a27b-59da59600e26/volumes" Jan 21 15:51:02 crc kubenswrapper[4760]: I0121 15:51:02.443099 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x5x5q" Jan 21 15:51:02 crc kubenswrapper[4760]: I0121 15:51:02.554777 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d5712fb-d149-4923-bd66-7ec385c7508d-utilities\") pod \"4d5712fb-d149-4923-bd66-7ec385c7508d\" (UID: \"4d5712fb-d149-4923-bd66-7ec385c7508d\") " Jan 21 15:51:02 crc kubenswrapper[4760]: I0121 15:51:02.554899 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4t8k\" (UniqueName: \"kubernetes.io/projected/4d5712fb-d149-4923-bd66-7ec385c7508d-kube-api-access-z4t8k\") pod \"4d5712fb-d149-4923-bd66-7ec385c7508d\" (UID: \"4d5712fb-d149-4923-bd66-7ec385c7508d\") " Jan 21 15:51:02 crc kubenswrapper[4760]: I0121 15:51:02.554985 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d5712fb-d149-4923-bd66-7ec385c7508d-catalog-content\") pod \"4d5712fb-d149-4923-bd66-7ec385c7508d\" (UID: \"4d5712fb-d149-4923-bd66-7ec385c7508d\") " Jan 21 15:51:02 crc kubenswrapper[4760]: I0121 15:51:02.563478 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d5712fb-d149-4923-bd66-7ec385c7508d-utilities" (OuterVolumeSpecName: "utilities") pod "4d5712fb-d149-4923-bd66-7ec385c7508d" (UID: "4d5712fb-d149-4923-bd66-7ec385c7508d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:51:02 crc kubenswrapper[4760]: I0121 15:51:02.578856 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d5712fb-d149-4923-bd66-7ec385c7508d-kube-api-access-z4t8k" (OuterVolumeSpecName: "kube-api-access-z4t8k") pod "4d5712fb-d149-4923-bd66-7ec385c7508d" (UID: "4d5712fb-d149-4923-bd66-7ec385c7508d"). InnerVolumeSpecName "kube-api-access-z4t8k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:51:02 crc kubenswrapper[4760]: I0121 15:51:02.613927 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d5712fb-d149-4923-bd66-7ec385c7508d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4d5712fb-d149-4923-bd66-7ec385c7508d" (UID: "4d5712fb-d149-4923-bd66-7ec385c7508d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:51:02 crc kubenswrapper[4760]: I0121 15:51:02.631104 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6ndfg" Jan 21 15:51:02 crc kubenswrapper[4760]: I0121 15:51:02.656309 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4t8k\" (UniqueName: \"kubernetes.io/projected/4d5712fb-d149-4923-bd66-7ec385c7508d-kube-api-access-z4t8k\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:02 crc kubenswrapper[4760]: I0121 15:51:02.656365 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d5712fb-d149-4923-bd66-7ec385c7508d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:02 crc kubenswrapper[4760]: I0121 15:51:02.656379 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d5712fb-d149-4923-bd66-7ec385c7508d-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:02 crc kubenswrapper[4760]: I0121 15:51:02.757306 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zrnd\" (UniqueName: \"kubernetes.io/projected/eae3b2cf-b59a-4ff2-801e-e6a6be3692dc-kube-api-access-7zrnd\") pod \"eae3b2cf-b59a-4ff2-801e-e6a6be3692dc\" (UID: \"eae3b2cf-b59a-4ff2-801e-e6a6be3692dc\") " Jan 21 15:51:02 crc kubenswrapper[4760]: I0121 15:51:02.757415 4760 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eae3b2cf-b59a-4ff2-801e-e6a6be3692dc-utilities\") pod \"eae3b2cf-b59a-4ff2-801e-e6a6be3692dc\" (UID: \"eae3b2cf-b59a-4ff2-801e-e6a6be3692dc\") " Jan 21 15:51:02 crc kubenswrapper[4760]: I0121 15:51:02.757487 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eae3b2cf-b59a-4ff2-801e-e6a6be3692dc-catalog-content\") pod \"eae3b2cf-b59a-4ff2-801e-e6a6be3692dc\" (UID: \"eae3b2cf-b59a-4ff2-801e-e6a6be3692dc\") " Jan 21 15:51:02 crc kubenswrapper[4760]: I0121 15:51:02.758252 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eae3b2cf-b59a-4ff2-801e-e6a6be3692dc-utilities" (OuterVolumeSpecName: "utilities") pod "eae3b2cf-b59a-4ff2-801e-e6a6be3692dc" (UID: "eae3b2cf-b59a-4ff2-801e-e6a6be3692dc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:51:02 crc kubenswrapper[4760]: I0121 15:51:02.758441 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eae3b2cf-b59a-4ff2-801e-e6a6be3692dc-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:02 crc kubenswrapper[4760]: I0121 15:51:02.761072 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eae3b2cf-b59a-4ff2-801e-e6a6be3692dc-kube-api-access-7zrnd" (OuterVolumeSpecName: "kube-api-access-7zrnd") pod "eae3b2cf-b59a-4ff2-801e-e6a6be3692dc" (UID: "eae3b2cf-b59a-4ff2-801e-e6a6be3692dc"). InnerVolumeSpecName "kube-api-access-7zrnd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:51:02 crc kubenswrapper[4760]: I0121 15:51:02.786184 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eae3b2cf-b59a-4ff2-801e-e6a6be3692dc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eae3b2cf-b59a-4ff2-801e-e6a6be3692dc" (UID: "eae3b2cf-b59a-4ff2-801e-e6a6be3692dc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:51:02 crc kubenswrapper[4760]: I0121 15:51:02.859848 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eae3b2cf-b59a-4ff2-801e-e6a6be3692dc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:02 crc kubenswrapper[4760]: I0121 15:51:02.860195 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zrnd\" (UniqueName: \"kubernetes.io/projected/eae3b2cf-b59a-4ff2-801e-e6a6be3692dc-kube-api-access-7zrnd\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:03 crc kubenswrapper[4760]: I0121 15:51:03.140856 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6ndfg" event={"ID":"eae3b2cf-b59a-4ff2-801e-e6a6be3692dc","Type":"ContainerDied","Data":"9c12daa32d80d4059ffd15ea75fd65c89942794d8334765608e63b9adbad6223"} Jan 21 15:51:03 crc kubenswrapper[4760]: I0121 15:51:03.140899 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6ndfg" Jan 21 15:51:03 crc kubenswrapper[4760]: I0121 15:51:03.142994 4760 generic.go:334] "Generic (PLEG): container finished" podID="f73bc16d-d078-43de-a21d-f79b9529f2dc" containerID="6549dd25316b3c68e02382874063bdbcd8175a0fd9dbc1d9decaba6787417c7c" exitCode=0 Jan 21 15:51:03 crc kubenswrapper[4760]: I0121 15:51:03.143069 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hckt4" event={"ID":"f73bc16d-d078-43de-a21d-f79b9529f2dc","Type":"ContainerDied","Data":"6549dd25316b3c68e02382874063bdbcd8175a0fd9dbc1d9decaba6787417c7c"} Jan 21 15:51:03 crc kubenswrapper[4760]: I0121 15:51:03.144744 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x5x5q" event={"ID":"4d5712fb-d149-4923-bd66-7ec385c7508d","Type":"ContainerDied","Data":"b49eabd9c40dfcff1229e3fce7a175dcd666f9a87becb64c24e0cea1a2f942b3"} Jan 21 15:51:03 crc kubenswrapper[4760]: I0121 15:51:03.144836 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x5x5q" Jan 21 15:51:03 crc kubenswrapper[4760]: I0121 15:51:03.178375 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6ndfg"] Jan 21 15:51:03 crc kubenswrapper[4760]: I0121 15:51:03.182069 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6ndfg"] Jan 21 15:51:03 crc kubenswrapper[4760]: I0121 15:51:03.190460 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x5x5q"] Jan 21 15:51:03 crc kubenswrapper[4760]: I0121 15:51:03.193273 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-x5x5q"] Jan 21 15:51:03 crc kubenswrapper[4760]: I0121 15:51:03.630831 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d5712fb-d149-4923-bd66-7ec385c7508d" path="/var/lib/kubelet/pods/4d5712fb-d149-4923-bd66-7ec385c7508d/volumes" Jan 21 15:51:03 crc kubenswrapper[4760]: I0121 15:51:03.631564 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eae3b2cf-b59a-4ff2-801e-e6a6be3692dc" path="/var/lib/kubelet/pods/eae3b2cf-b59a-4ff2-801e-e6a6be3692dc/volumes" Jan 21 15:51:03 crc kubenswrapper[4760]: I0121 15:51:03.728622 4760 scope.go:117] "RemoveContainer" containerID="fdc6b5906b8f47c6da41a02a991bb86865d318d619b9ba0965311a536858f30f" Jan 21 15:51:03 crc kubenswrapper[4760]: I0121 15:51:03.794729 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hckt4" Jan 21 15:51:03 crc kubenswrapper[4760]: I0121 15:51:03.876596 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f73bc16d-d078-43de-a21d-f79b9529f2dc-catalog-content\") pod \"f73bc16d-d078-43de-a21d-f79b9529f2dc\" (UID: \"f73bc16d-d078-43de-a21d-f79b9529f2dc\") " Jan 21 15:51:03 crc kubenswrapper[4760]: I0121 15:51:03.876674 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f73bc16d-d078-43de-a21d-f79b9529f2dc-utilities\") pod \"f73bc16d-d078-43de-a21d-f79b9529f2dc\" (UID: \"f73bc16d-d078-43de-a21d-f79b9529f2dc\") " Jan 21 15:51:03 crc kubenswrapper[4760]: I0121 15:51:03.876789 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vb7qd\" (UniqueName: \"kubernetes.io/projected/f73bc16d-d078-43de-a21d-f79b9529f2dc-kube-api-access-vb7qd\") pod \"f73bc16d-d078-43de-a21d-f79b9529f2dc\" (UID: \"f73bc16d-d078-43de-a21d-f79b9529f2dc\") " Jan 21 15:51:03 crc kubenswrapper[4760]: I0121 15:51:03.877637 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f73bc16d-d078-43de-a21d-f79b9529f2dc-utilities" (OuterVolumeSpecName: "utilities") pod "f73bc16d-d078-43de-a21d-f79b9529f2dc" (UID: "f73bc16d-d078-43de-a21d-f79b9529f2dc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:51:03 crc kubenswrapper[4760]: I0121 15:51:03.891576 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f73bc16d-d078-43de-a21d-f79b9529f2dc-kube-api-access-vb7qd" (OuterVolumeSpecName: "kube-api-access-vb7qd") pod "f73bc16d-d078-43de-a21d-f79b9529f2dc" (UID: "f73bc16d-d078-43de-a21d-f79b9529f2dc"). InnerVolumeSpecName "kube-api-access-vb7qd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:51:03 crc kubenswrapper[4760]: I0121 15:51:03.978630 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f73bc16d-d078-43de-a21d-f79b9529f2dc-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:03 crc kubenswrapper[4760]: I0121 15:51:03.978670 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vb7qd\" (UniqueName: \"kubernetes.io/projected/f73bc16d-d078-43de-a21d-f79b9529f2dc-kube-api-access-vb7qd\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:04 crc kubenswrapper[4760]: I0121 15:51:04.004252 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f73bc16d-d078-43de-a21d-f79b9529f2dc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f73bc16d-d078-43de-a21d-f79b9529f2dc" (UID: "f73bc16d-d078-43de-a21d-f79b9529f2dc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:51:04 crc kubenswrapper[4760]: I0121 15:51:04.079587 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f73bc16d-d078-43de-a21d-f79b9529f2dc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:04 crc kubenswrapper[4760]: I0121 15:51:04.154095 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hckt4" event={"ID":"f73bc16d-d078-43de-a21d-f79b9529f2dc","Type":"ContainerDied","Data":"cdb9d1d40b1eecfd3836a91488864ac34ac2e234164472b070de1fb4b219de14"} Jan 21 15:51:04 crc kubenswrapper[4760]: I0121 15:51:04.154150 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hckt4" Jan 21 15:51:04 crc kubenswrapper[4760]: I0121 15:51:04.180245 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hckt4"] Jan 21 15:51:04 crc kubenswrapper[4760]: I0121 15:51:04.183660 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hckt4"] Jan 21 15:51:04 crc kubenswrapper[4760]: I0121 15:51:04.513295 4760 scope.go:117] "RemoveContainer" containerID="44d5eacb3cae9354e841247c1b95494990dd89e328c447caec11d523a325a699" Jan 21 15:51:04 crc kubenswrapper[4760]: I0121 15:51:04.572860 4760 scope.go:117] "RemoveContainer" containerID="2d5efab855a3f6262f9129304876642f0063917251365c70576c77fa23d449b5" Jan 21 15:51:04 crc kubenswrapper[4760]: I0121 15:51:04.637639 4760 scope.go:117] "RemoveContainer" containerID="ac9f293c7f8e0b0eb98c0c533b04b14ae706a9320a5259557538a6b36412667e" Jan 21 15:51:04 crc kubenswrapper[4760]: I0121 15:51:04.662659 4760 scope.go:117] "RemoveContainer" containerID="d13a45a1552b3e82fff8110314eba47ab12fd71dc125c139385e8e3db8a4d57f" Jan 21 15:51:04 crc kubenswrapper[4760]: I0121 15:51:04.679662 4760 scope.go:117] "RemoveContainer" containerID="190d4a44f2824b8d540470d9a160a5e6e060bb64146ab6ea011983d908fc8d64" Jan 21 15:51:04 crc kubenswrapper[4760]: I0121 15:51:04.699905 4760 scope.go:117] "RemoveContainer" containerID="feb9f59c9bf1804a69669d8bb543743e133ae9729c77c2d5bc787139ceb12cfb" Jan 21 15:51:04 crc kubenswrapper[4760]: I0121 15:51:04.717801 4760 scope.go:117] "RemoveContainer" containerID="298f3443bb35cede4296fc84eb0c6530e2963c3e90fd74ab5b5a237306e9d7f0" Jan 21 15:51:04 crc kubenswrapper[4760]: I0121 15:51:04.732581 4760 scope.go:117] "RemoveContainer" containerID="6549dd25316b3c68e02382874063bdbcd8175a0fd9dbc1d9decaba6787417c7c" Jan 21 15:51:04 crc kubenswrapper[4760]: I0121 15:51:04.762022 4760 scope.go:117] "RemoveContainer" 
containerID="81fe443766167fda35b8c4567b9863b83448560bf913712358dc824f0e37e5eb" Jan 21 15:51:04 crc kubenswrapper[4760]: I0121 15:51:04.777895 4760 scope.go:117] "RemoveContainer" containerID="11d48db9fe70d759230c200b6d1811336c60c722ef8e03f9d8379e788bf6b2ab" Jan 21 15:51:05 crc kubenswrapper[4760]: I0121 15:51:05.466803 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-56d695b7b9-28gwl"] Jan 21 15:51:05 crc kubenswrapper[4760]: I0121 15:51:05.467354 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-56d695b7b9-28gwl" podUID="cf940287-2e74-4026-87fa-33ff29056899" containerName="controller-manager" containerID="cri-o://d67b95ffe112d4d205818b36d27374fc94e60dcc650e6c837c0c3d2c153c5bb7" gracePeriod=30 Jan 21 15:51:05 crc kubenswrapper[4760]: I0121 15:51:05.570931 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75fdddbbb-hms6q"] Jan 21 15:51:05 crc kubenswrapper[4760]: I0121 15:51:05.571207 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-75fdddbbb-hms6q" podUID="1a0be348-0efb-43ad-812e-da614a51704b" containerName="route-controller-manager" containerID="cri-o://9a5b9988d65f0fed40c684cdec307fa35fc45a51f9ac96c548d7643e7eab4ebc" gracePeriod=30 Jan 21 15:51:05 crc kubenswrapper[4760]: I0121 15:51:05.638345 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f73bc16d-d078-43de-a21d-f79b9529f2dc" path="/var/lib/kubelet/pods/f73bc16d-d078-43de-a21d-f79b9529f2dc/volumes" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.010403 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-56d695b7b9-28gwl" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.021434 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75fdddbbb-hms6q" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.124776 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cf940287-2e74-4026-87fa-33ff29056899-proxy-ca-bundles\") pod \"cf940287-2e74-4026-87fa-33ff29056899\" (UID: \"cf940287-2e74-4026-87fa-33ff29056899\") " Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.125267 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf940287-2e74-4026-87fa-33ff29056899-config\") pod \"cf940287-2e74-4026-87fa-33ff29056899\" (UID: \"cf940287-2e74-4026-87fa-33ff29056899\") " Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.125553 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfn4v\" (UniqueName: \"kubernetes.io/projected/cf940287-2e74-4026-87fa-33ff29056899-kube-api-access-pfn4v\") pod \"cf940287-2e74-4026-87fa-33ff29056899\" (UID: \"cf940287-2e74-4026-87fa-33ff29056899\") " Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.125608 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a0be348-0efb-43ad-812e-da614a51704b-config\") pod \"1a0be348-0efb-43ad-812e-da614a51704b\" (UID: \"1a0be348-0efb-43ad-812e-da614a51704b\") " Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.125662 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvrp6\" (UniqueName: \"kubernetes.io/projected/1a0be348-0efb-43ad-812e-da614a51704b-kube-api-access-pvrp6\") 
pod \"1a0be348-0efb-43ad-812e-da614a51704b\" (UID: \"1a0be348-0efb-43ad-812e-da614a51704b\") " Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.125689 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cf940287-2e74-4026-87fa-33ff29056899-client-ca\") pod \"cf940287-2e74-4026-87fa-33ff29056899\" (UID: \"cf940287-2e74-4026-87fa-33ff29056899\") " Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.125741 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a0be348-0efb-43ad-812e-da614a51704b-serving-cert\") pod \"1a0be348-0efb-43ad-812e-da614a51704b\" (UID: \"1a0be348-0efb-43ad-812e-da614a51704b\") " Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.125845 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf940287-2e74-4026-87fa-33ff29056899-serving-cert\") pod \"cf940287-2e74-4026-87fa-33ff29056899\" (UID: \"cf940287-2e74-4026-87fa-33ff29056899\") " Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.125814 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf940287-2e74-4026-87fa-33ff29056899-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "cf940287-2e74-4026-87fa-33ff29056899" (UID: "cf940287-2e74-4026-87fa-33ff29056899"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.125883 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a0be348-0efb-43ad-812e-da614a51704b-client-ca\") pod \"1a0be348-0efb-43ad-812e-da614a51704b\" (UID: \"1a0be348-0efb-43ad-812e-da614a51704b\") " Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.125926 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf940287-2e74-4026-87fa-33ff29056899-config" (OuterVolumeSpecName: "config") pod "cf940287-2e74-4026-87fa-33ff29056899" (UID: "cf940287-2e74-4026-87fa-33ff29056899"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.126335 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf940287-2e74-4026-87fa-33ff29056899-client-ca" (OuterVolumeSpecName: "client-ca") pod "cf940287-2e74-4026-87fa-33ff29056899" (UID: "cf940287-2e74-4026-87fa-33ff29056899"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.126833 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a0be348-0efb-43ad-812e-da614a51704b-client-ca" (OuterVolumeSpecName: "client-ca") pod "1a0be348-0efb-43ad-812e-da614a51704b" (UID: "1a0be348-0efb-43ad-812e-da614a51704b"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.127000 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a0be348-0efb-43ad-812e-da614a51704b-config" (OuterVolumeSpecName: "config") pod "1a0be348-0efb-43ad-812e-da614a51704b" (UID: "1a0be348-0efb-43ad-812e-da614a51704b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.127717 4760 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a0be348-0efb-43ad-812e-da614a51704b-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.127739 4760 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cf940287-2e74-4026-87fa-33ff29056899-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.127753 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf940287-2e74-4026-87fa-33ff29056899-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.127761 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a0be348-0efb-43ad-812e-da614a51704b-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.127771 4760 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cf940287-2e74-4026-87fa-33ff29056899-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.136556 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf940287-2e74-4026-87fa-33ff29056899-serving-cert" 
(OuterVolumeSpecName: "serving-cert") pod "cf940287-2e74-4026-87fa-33ff29056899" (UID: "cf940287-2e74-4026-87fa-33ff29056899"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.136574 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a0be348-0efb-43ad-812e-da614a51704b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1a0be348-0efb-43ad-812e-da614a51704b" (UID: "1a0be348-0efb-43ad-812e-da614a51704b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.136640 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a0be348-0efb-43ad-812e-da614a51704b-kube-api-access-pvrp6" (OuterVolumeSpecName: "kube-api-access-pvrp6") pod "1a0be348-0efb-43ad-812e-da614a51704b" (UID: "1a0be348-0efb-43ad-812e-da614a51704b"). InnerVolumeSpecName "kube-api-access-pvrp6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.136715 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf940287-2e74-4026-87fa-33ff29056899-kube-api-access-pfn4v" (OuterVolumeSpecName: "kube-api-access-pfn4v") pod "cf940287-2e74-4026-87fa-33ff29056899" (UID: "cf940287-2e74-4026-87fa-33ff29056899"). InnerVolumeSpecName "kube-api-access-pfn4v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.174840 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bcf5p" event={"ID":"ddcb6012-213a-4989-8cb3-60fc763a8255","Type":"ContainerStarted","Data":"784ca42f86e042d6d1eeb3fc149e29341c05e0b1929b4851242097fb3a7b5a27"} Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.176497 4760 generic.go:334] "Generic (PLEG): container finished" podID="1a0be348-0efb-43ad-812e-da614a51704b" containerID="9a5b9988d65f0fed40c684cdec307fa35fc45a51f9ac96c548d7643e7eab4ebc" exitCode=0 Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.176533 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75fdddbbb-hms6q" event={"ID":"1a0be348-0efb-43ad-812e-da614a51704b","Type":"ContainerDied","Data":"9a5b9988d65f0fed40c684cdec307fa35fc45a51f9ac96c548d7643e7eab4ebc"} Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.176514 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75fdddbbb-hms6q" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.176566 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75fdddbbb-hms6q" event={"ID":"1a0be348-0efb-43ad-812e-da614a51704b","Type":"ContainerDied","Data":"583defa460c7acc5e85d4d14ce60f028b35e910eff146d63b496377f7bb34a68"} Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.176586 4760 scope.go:117] "RemoveContainer" containerID="9a5b9988d65f0fed40c684cdec307fa35fc45a51f9ac96c548d7643e7eab4ebc" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.179216 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerStarted","Data":"e0b921ab21c8bb32b2a18330e6a05add434649cba02aa921e03391d26694c2f1"} Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.182923 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-26f8s" event={"ID":"f08c19d6-0704-4562-8e0b-aa1d20161f70","Type":"ContainerStarted","Data":"836e03a50693811df5a6285b5bdc76f53803b8117a12a1b09e2314098dbaa73e"} Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.191740 4760 generic.go:334] "Generic (PLEG): container finished" podID="cf940287-2e74-4026-87fa-33ff29056899" containerID="d67b95ffe112d4d205818b36d27374fc94e60dcc650e6c837c0c3d2c153c5bb7" exitCode=0 Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.191792 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56d695b7b9-28gwl" event={"ID":"cf940287-2e74-4026-87fa-33ff29056899","Type":"ContainerDied","Data":"d67b95ffe112d4d205818b36d27374fc94e60dcc650e6c837c0c3d2c153c5bb7"} Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.191820 4760 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-controller-manager/controller-manager-56d695b7b9-28gwl" event={"ID":"cf940287-2e74-4026-87fa-33ff29056899","Type":"ContainerDied","Data":"8dba43c5c93d0c3bd3a4831af727dbf207bd86f1b155a49887596646532a3a0e"} Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.191827 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-56d695b7b9-28gwl" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.200010 4760 scope.go:117] "RemoveContainer" containerID="9a5b9988d65f0fed40c684cdec307fa35fc45a51f9ac96c548d7643e7eab4ebc" Jan 21 15:51:06 crc kubenswrapper[4760]: E0121 15:51:06.200489 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a5b9988d65f0fed40c684cdec307fa35fc45a51f9ac96c548d7643e7eab4ebc\": container with ID starting with 9a5b9988d65f0fed40c684cdec307fa35fc45a51f9ac96c548d7643e7eab4ebc not found: ID does not exist" containerID="9a5b9988d65f0fed40c684cdec307fa35fc45a51f9ac96c548d7643e7eab4ebc" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.200525 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a5b9988d65f0fed40c684cdec307fa35fc45a51f9ac96c548d7643e7eab4ebc"} err="failed to get container status \"9a5b9988d65f0fed40c684cdec307fa35fc45a51f9ac96c548d7643e7eab4ebc\": rpc error: code = NotFound desc = could not find container \"9a5b9988d65f0fed40c684cdec307fa35fc45a51f9ac96c548d7643e7eab4ebc\": container with ID starting with 9a5b9988d65f0fed40c684cdec307fa35fc45a51f9ac96c548d7643e7eab4ebc not found: ID does not exist" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.200571 4760 scope.go:117] "RemoveContainer" containerID="d67b95ffe112d4d205818b36d27374fc94e60dcc650e6c837c0c3d2c153c5bb7" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.203413 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/community-operators-bcf5p" podStartSLOduration=5.451156606 podStartE2EDuration="1m35.203311945s" podCreationTimestamp="2026-01-21 15:49:31 +0000 UTC" firstStartedPulling="2026-01-21 15:49:33.976474017 +0000 UTC m=+144.644243595" lastFinishedPulling="2026-01-21 15:51:03.728629356 +0000 UTC m=+234.396398934" observedRunningTime="2026-01-21 15:51:06.199639541 +0000 UTC m=+236.867409119" watchObservedRunningTime="2026-01-21 15:51:06.203311945 +0000 UTC m=+236.871081523" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.227739 4760 scope.go:117] "RemoveContainer" containerID="d67b95ffe112d4d205818b36d27374fc94e60dcc650e6c837c0c3d2c153c5bb7" Jan 21 15:51:06 crc kubenswrapper[4760]: E0121 15:51:06.228341 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d67b95ffe112d4d205818b36d27374fc94e60dcc650e6c837c0c3d2c153c5bb7\": container with ID starting with d67b95ffe112d4d205818b36d27374fc94e60dcc650e6c837c0c3d2c153c5bb7 not found: ID does not exist" containerID="d67b95ffe112d4d205818b36d27374fc94e60dcc650e6c837c0c3d2c153c5bb7" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.228415 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d67b95ffe112d4d205818b36d27374fc94e60dcc650e6c837c0c3d2c153c5bb7"} err="failed to get container status \"d67b95ffe112d4d205818b36d27374fc94e60dcc650e6c837c0c3d2c153c5bb7\": rpc error: code = NotFound desc = could not find container \"d67b95ffe112d4d205818b36d27374fc94e60dcc650e6c837c0c3d2c153c5bb7\": container with ID starting with d67b95ffe112d4d205818b36d27374fc94e60dcc650e6c837c0c3d2c153c5bb7 not found: ID does not exist" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.228523 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfn4v\" (UniqueName: \"kubernetes.io/projected/cf940287-2e74-4026-87fa-33ff29056899-kube-api-access-pfn4v\") on node \"crc\" 
DevicePath \"\"" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.228547 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvrp6\" (UniqueName: \"kubernetes.io/projected/1a0be348-0efb-43ad-812e-da614a51704b-kube-api-access-pvrp6\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.228560 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a0be348-0efb-43ad-812e-da614a51704b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.228572 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf940287-2e74-4026-87fa-33ff29056899-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.236377 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-26f8s" podStartSLOduration=4.875991654 podStartE2EDuration="1m32.236354793s" podCreationTimestamp="2026-01-21 15:49:34 +0000 UTC" firstStartedPulling="2026-01-21 15:49:37.152880567 +0000 UTC m=+147.820650145" lastFinishedPulling="2026-01-21 15:51:04.513243716 +0000 UTC m=+235.181013284" observedRunningTime="2026-01-21 15:51:06.230675327 +0000 UTC m=+236.898444905" watchObservedRunningTime="2026-01-21 15:51:06.236354793 +0000 UTC m=+236.904124371" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.254583 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75fdddbbb-hms6q"] Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.260299 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75fdddbbb-hms6q"] Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.271057 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-controller-manager/controller-manager-56d695b7b9-28gwl"] Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.275476 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-56d695b7b9-28gwl"] Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.977779 4760 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 15:51:06 crc kubenswrapper[4760]: E0121 15:51:06.978824 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eae3b2cf-b59a-4ff2-801e-e6a6be3692dc" containerName="extract-utilities" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.978856 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="eae3b2cf-b59a-4ff2-801e-e6a6be3692dc" containerName="extract-utilities" Jan 21 15:51:06 crc kubenswrapper[4760]: E0121 15:51:06.979012 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4c156f4-f6be-46db-a27b-59da59600e26" containerName="extract-content" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.979038 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4c156f4-f6be-46db-a27b-59da59600e26" containerName="extract-content" Jan 21 15:51:06 crc kubenswrapper[4760]: E0121 15:51:06.979058 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d5712fb-d149-4923-bd66-7ec385c7508d" containerName="extract-content" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.979073 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d5712fb-d149-4923-bd66-7ec385c7508d" containerName="extract-content" Jan 21 15:51:06 crc kubenswrapper[4760]: E0121 15:51:06.979089 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4c156f4-f6be-46db-a27b-59da59600e26" containerName="extract-utilities" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.979100 4760 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b4c156f4-f6be-46db-a27b-59da59600e26" containerName="extract-utilities" Jan 21 15:51:06 crc kubenswrapper[4760]: E0121 15:51:06.979115 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f73bc16d-d078-43de-a21d-f79b9529f2dc" containerName="extract-utilities" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.979126 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f73bc16d-d078-43de-a21d-f79b9529f2dc" containerName="extract-utilities" Jan 21 15:51:06 crc kubenswrapper[4760]: E0121 15:51:06.979144 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d5712fb-d149-4923-bd66-7ec385c7508d" containerName="registry-server" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.979158 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d5712fb-d149-4923-bd66-7ec385c7508d" containerName="registry-server" Jan 21 15:51:06 crc kubenswrapper[4760]: E0121 15:51:06.979175 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eae3b2cf-b59a-4ff2-801e-e6a6be3692dc" containerName="registry-server" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.979186 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="eae3b2cf-b59a-4ff2-801e-e6a6be3692dc" containerName="registry-server" Jan 21 15:51:06 crc kubenswrapper[4760]: E0121 15:51:06.979204 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50ff8c4c-86ac-4abe-9dbc-69a277a3e34c" containerName="pruner" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.979216 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="50ff8c4c-86ac-4abe-9dbc-69a277a3e34c" containerName="pruner" Jan 21 15:51:06 crc kubenswrapper[4760]: E0121 15:51:06.979235 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d5712fb-d149-4923-bd66-7ec385c7508d" containerName="extract-utilities" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.979246 4760 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="4d5712fb-d149-4923-bd66-7ec385c7508d" containerName="extract-utilities" Jan 21 15:51:06 crc kubenswrapper[4760]: E0121 15:51:06.979262 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f73bc16d-d078-43de-a21d-f79b9529f2dc" containerName="extract-content" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.979273 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f73bc16d-d078-43de-a21d-f79b9529f2dc" containerName="extract-content" Jan 21 15:51:06 crc kubenswrapper[4760]: E0121 15:51:06.979286 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f73bc16d-d078-43de-a21d-f79b9529f2dc" containerName="registry-server" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.979297 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f73bc16d-d078-43de-a21d-f79b9529f2dc" containerName="registry-server" Jan 21 15:51:06 crc kubenswrapper[4760]: E0121 15:51:06.979313 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf940287-2e74-4026-87fa-33ff29056899" containerName="controller-manager" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.979344 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf940287-2e74-4026-87fa-33ff29056899" containerName="controller-manager" Jan 21 15:51:06 crc kubenswrapper[4760]: E0121 15:51:06.979363 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a0be348-0efb-43ad-812e-da614a51704b" containerName="route-controller-manager" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.979374 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a0be348-0efb-43ad-812e-da614a51704b" containerName="route-controller-manager" Jan 21 15:51:06 crc kubenswrapper[4760]: E0121 15:51:06.979388 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eae3b2cf-b59a-4ff2-801e-e6a6be3692dc" containerName="extract-content" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.979400 4760 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="eae3b2cf-b59a-4ff2-801e-e6a6be3692dc" containerName="extract-content" Jan 21 15:51:06 crc kubenswrapper[4760]: E0121 15:51:06.979422 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4c156f4-f6be-46db-a27b-59da59600e26" containerName="registry-server" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.979433 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4c156f4-f6be-46db-a27b-59da59600e26" containerName="registry-server" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.979604 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf940287-2e74-4026-87fa-33ff29056899" containerName="controller-manager" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.979628 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4c156f4-f6be-46db-a27b-59da59600e26" containerName="registry-server" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.979642 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a0be348-0efb-43ad-812e-da614a51704b" containerName="route-controller-manager" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.979658 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="50ff8c4c-86ac-4abe-9dbc-69a277a3e34c" containerName="pruner" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.979674 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="eae3b2cf-b59a-4ff2-801e-e6a6be3692dc" containerName="registry-server" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.979691 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="f73bc16d-d078-43de-a21d-f79b9529f2dc" containerName="registry-server" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.979708 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d5712fb-d149-4923-bd66-7ec385c7508d" containerName="registry-server" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.980166 4760 kubelet.go:2431] 
"SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.980298 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.980726 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6" gracePeriod=15 Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.980758 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f" gracePeriod=15 Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.980747 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b" gracePeriod=15 Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.980831 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd" gracePeriod=15 Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.980732 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d" gracePeriod=15 Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.981073 4760 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 15:51:06 crc kubenswrapper[4760]: E0121 15:51:06.981377 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.981395 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 15:51:06 crc kubenswrapper[4760]: E0121 15:51:06.981412 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.981422 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 21 15:51:06 crc kubenswrapper[4760]: E0121 15:51:06.981460 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.981469 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 15:51:06 crc kubenswrapper[4760]: E0121 15:51:06.981481 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.981490 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-regeneration-controller" Jan 21 15:51:06 crc kubenswrapper[4760]: E0121 15:51:06.981503 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.981535 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 21 15:51:06 crc kubenswrapper[4760]: E0121 15:51:06.981552 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.981561 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 21 15:51:06 crc kubenswrapper[4760]: E0121 15:51:06.981574 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.981583 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.981750 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.981789 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.981822 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.981834 4760 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.981873 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 21 15:51:06 crc kubenswrapper[4760]: I0121 15:51:06.981887 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.140529 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.140593 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.140615 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.140637 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.140656 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.140771 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.140797 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.140823 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.198818 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.200008 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.200918 4760 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6" exitCode=0 Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.200945 4760 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f" exitCode=0 Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.200953 4760 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b" exitCode=0 Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.200962 4760 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d" exitCode=2 Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.201042 4760 scope.go:117] "RemoveContainer" containerID="fba979632c938557aed3a7a20aeb3abb6d2001ebf25045633b61463b1d670ca0" Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.205407 4760 generic.go:334] "Generic (PLEG): container finished" podID="430b8562-5701-4889-bf8f-71ddef9325b0" containerID="538f4e14c5c439a69f565cbbfdf51a679d826b8180a462bb91225371feef8312" exitCode=0 Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.205529 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" 
event={"ID":"430b8562-5701-4889-bf8f-71ddef9325b0","Type":"ContainerDied","Data":"538f4e14c5c439a69f565cbbfdf51a679d826b8180a462bb91225371feef8312"} Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.208048 4760 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.208696 4760 status_manager.go:851] "Failed to get status for pod" podUID="430b8562-5701-4889-bf8f-71ddef9325b0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.242490 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.242552 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.242579 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.242599 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.242616 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.242658 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.242676 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.242692 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.242723 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.242810 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.242788 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.242858 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.242889 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.242893 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.242914 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.242944 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.347211 4760 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.347275 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.636594 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1a0be348-0efb-43ad-812e-da614a51704b" path="/var/lib/kubelet/pods/1a0be348-0efb-43ad-812e-da614a51704b/volumes" Jan 21 15:51:07 crc kubenswrapper[4760]: I0121 15:51:07.638549 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf940287-2e74-4026-87fa-33ff29056899" path="/var/lib/kubelet/pods/cf940287-2e74-4026-87fa-33ff29056899/volumes" Jan 21 15:51:08 crc kubenswrapper[4760]: I0121 15:51:08.214923 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 21 15:51:08 crc kubenswrapper[4760]: I0121 15:51:08.445026 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 21 15:51:08 crc kubenswrapper[4760]: I0121 15:51:08.445893 4760 status_manager.go:851] "Failed to get status for pod" podUID="430b8562-5701-4889-bf8f-71ddef9325b0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:08 crc kubenswrapper[4760]: I0121 15:51:08.565705 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/430b8562-5701-4889-bf8f-71ddef9325b0-kube-api-access\") pod \"430b8562-5701-4889-bf8f-71ddef9325b0\" (UID: \"430b8562-5701-4889-bf8f-71ddef9325b0\") " Jan 21 15:51:08 crc kubenswrapper[4760]: I0121 15:51:08.566154 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/430b8562-5701-4889-bf8f-71ddef9325b0-kubelet-dir\") pod \"430b8562-5701-4889-bf8f-71ddef9325b0\" (UID: \"430b8562-5701-4889-bf8f-71ddef9325b0\") " Jan 21 15:51:08 crc kubenswrapper[4760]: I0121 15:51:08.566203 4760 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/430b8562-5701-4889-bf8f-71ddef9325b0-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "430b8562-5701-4889-bf8f-71ddef9325b0" (UID: "430b8562-5701-4889-bf8f-71ddef9325b0"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:51:08 crc kubenswrapper[4760]: I0121 15:51:08.566378 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/430b8562-5701-4889-bf8f-71ddef9325b0-var-lock" (OuterVolumeSpecName: "var-lock") pod "430b8562-5701-4889-bf8f-71ddef9325b0" (UID: "430b8562-5701-4889-bf8f-71ddef9325b0"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:51:08 crc kubenswrapper[4760]: I0121 15:51:08.566259 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/430b8562-5701-4889-bf8f-71ddef9325b0-var-lock\") pod \"430b8562-5701-4889-bf8f-71ddef9325b0\" (UID: \"430b8562-5701-4889-bf8f-71ddef9325b0\") " Jan 21 15:51:08 crc kubenswrapper[4760]: I0121 15:51:08.566685 4760 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/430b8562-5701-4889-bf8f-71ddef9325b0-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:08 crc kubenswrapper[4760]: I0121 15:51:08.566706 4760 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/430b8562-5701-4889-bf8f-71ddef9325b0-var-lock\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:08 crc kubenswrapper[4760]: I0121 15:51:08.578524 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/430b8562-5701-4889-bf8f-71ddef9325b0-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "430b8562-5701-4889-bf8f-71ddef9325b0" (UID: "430b8562-5701-4889-bf8f-71ddef9325b0"). 
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:51:08 crc kubenswrapper[4760]: I0121 15:51:08.667692 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/430b8562-5701-4889-bf8f-71ddef9325b0-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:09 crc kubenswrapper[4760]: I0121 15:51:09.223813 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"430b8562-5701-4889-bf8f-71ddef9325b0","Type":"ContainerDied","Data":"11787bcccbc8568a38fbd1c4f817ef453e9c0f9b11aa51b3a63e6d984418f3b9"} Jan 21 15:51:09 crc kubenswrapper[4760]: I0121 15:51:09.224135 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11787bcccbc8568a38fbd1c4f817ef453e9c0f9b11aa51b3a63e6d984418f3b9" Jan 21 15:51:09 crc kubenswrapper[4760]: I0121 15:51:09.224232 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 21 15:51:09 crc kubenswrapper[4760]: I0121 15:51:09.238428 4760 status_manager.go:851] "Failed to get status for pod" podUID="430b8562-5701-4889-bf8f-71ddef9325b0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:09 crc kubenswrapper[4760]: I0121 15:51:09.633203 4760 status_manager.go:851] "Failed to get status for pod" podUID="430b8562-5701-4889-bf8f-71ddef9325b0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:09 crc kubenswrapper[4760]: I0121 15:51:09.860451 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 21 15:51:09 crc kubenswrapper[4760]: I0121 15:51:09.861892 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:51:09 crc kubenswrapper[4760]: I0121 15:51:09.862643 4760 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:09 crc kubenswrapper[4760]: I0121 15:51:09.863118 4760 status_manager.go:851] "Failed to get status for pod" podUID="430b8562-5701-4889-bf8f-71ddef9325b0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:09 crc kubenswrapper[4760]: I0121 15:51:09.989627 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 21 15:51:09 crc kubenswrapper[4760]: I0121 15:51:09.989774 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 21 15:51:09 crc kubenswrapper[4760]: I0121 15:51:09.989798 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 21 15:51:09 crc kubenswrapper[4760]: I0121 15:51:09.990041 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:51:09 crc kubenswrapper[4760]: I0121 15:51:09.990074 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:51:09 crc kubenswrapper[4760]: I0121 15:51:09.990087 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:51:10 crc kubenswrapper[4760]: I0121 15:51:10.091721 4760 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:10 crc kubenswrapper[4760]: I0121 15:51:10.092038 4760 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:10 crc kubenswrapper[4760]: I0121 15:51:10.092157 4760 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:10 crc kubenswrapper[4760]: I0121 15:51:10.231041 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 21 15:51:10 crc kubenswrapper[4760]: I0121 15:51:10.232127 4760 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd" exitCode=0 Jan 21 15:51:10 crc kubenswrapper[4760]: I0121 15:51:10.232208 4760 scope.go:117] "RemoveContainer" containerID="19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6" Jan 21 15:51:10 crc kubenswrapper[4760]: I0121 15:51:10.232392 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:51:10 crc kubenswrapper[4760]: I0121 15:51:10.252727 4760 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:10 crc kubenswrapper[4760]: I0121 15:51:10.253086 4760 status_manager.go:851] "Failed to get status for pod" podUID="430b8562-5701-4889-bf8f-71ddef9325b0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:10 crc kubenswrapper[4760]: I0121 15:51:10.264705 4760 scope.go:117] "RemoveContainer" containerID="10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f" Jan 21 15:51:10 crc kubenswrapper[4760]: I0121 15:51:10.280688 4760 scope.go:117] "RemoveContainer" containerID="85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b" Jan 21 15:51:10 crc kubenswrapper[4760]: I0121 15:51:10.300960 4760 scope.go:117] "RemoveContainer" containerID="871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d" Jan 21 15:51:10 crc kubenswrapper[4760]: I0121 15:51:10.317401 4760 scope.go:117] "RemoveContainer" containerID="eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd" Jan 21 15:51:10 crc kubenswrapper[4760]: I0121 15:51:10.334284 4760 scope.go:117] "RemoveContainer" containerID="e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7" Jan 21 15:51:10 crc kubenswrapper[4760]: I0121 15:51:10.358356 4760 scope.go:117] "RemoveContainer" containerID="19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6" Jan 21 15:51:10 crc kubenswrapper[4760]: E0121 15:51:10.359708 4760 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\": container with ID starting with 19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6 not found: ID does not exist" containerID="19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6" Jan 21 15:51:10 crc kubenswrapper[4760]: I0121 15:51:10.359748 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6"} err="failed to get container status \"19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\": rpc error: code = NotFound desc = could not find container \"19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6\": container with ID starting with 19ebd9fc0d5b67ff5a34346ba8daf3dbe519e9f3b1bdc113a86e370d87e4dee6 not found: ID does not exist" Jan 21 15:51:10 crc kubenswrapper[4760]: I0121 15:51:10.359786 4760 scope.go:117] "RemoveContainer" containerID="10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f" Jan 21 15:51:10 crc kubenswrapper[4760]: E0121 15:51:10.360435 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\": container with ID starting with 10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f not found: ID does not exist" containerID="10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f" Jan 21 15:51:10 crc kubenswrapper[4760]: I0121 15:51:10.360468 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f"} err="failed to get container status \"10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\": rpc error: code = NotFound desc = could 
not find container \"10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f\": container with ID starting with 10d303267571f1e3c336d6e35e146e223d5573f0c75d3d885388ae33a4af9e1f not found: ID does not exist" Jan 21 15:51:10 crc kubenswrapper[4760]: I0121 15:51:10.360488 4760 scope.go:117] "RemoveContainer" containerID="85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b" Jan 21 15:51:10 crc kubenswrapper[4760]: E0121 15:51:10.360961 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\": container with ID starting with 85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b not found: ID does not exist" containerID="85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b" Jan 21 15:51:10 crc kubenswrapper[4760]: I0121 15:51:10.361061 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b"} err="failed to get container status \"85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\": rpc error: code = NotFound desc = could not find container \"85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b\": container with ID starting with 85422c1c21558fc4cadcf3451123ab5af85be63e20fd6f4f0e50e78a08cb566b not found: ID does not exist" Jan 21 15:51:10 crc kubenswrapper[4760]: I0121 15:51:10.361097 4760 scope.go:117] "RemoveContainer" containerID="871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d" Jan 21 15:51:10 crc kubenswrapper[4760]: E0121 15:51:10.361483 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\": container with ID starting with 871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d not found: 
ID does not exist" containerID="871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d" Jan 21 15:51:10 crc kubenswrapper[4760]: I0121 15:51:10.361519 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d"} err="failed to get container status \"871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\": rpc error: code = NotFound desc = could not find container \"871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d\": container with ID starting with 871bce6c72be58ef52b9ebb0e5f93adf4dc5bcb0874ae6a2b26b4e94d726002d not found: ID does not exist" Jan 21 15:51:10 crc kubenswrapper[4760]: I0121 15:51:10.361542 4760 scope.go:117] "RemoveContainer" containerID="eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd" Jan 21 15:51:10 crc kubenswrapper[4760]: E0121 15:51:10.361897 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\": container with ID starting with eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd not found: ID does not exist" containerID="eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd" Jan 21 15:51:10 crc kubenswrapper[4760]: I0121 15:51:10.361925 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd"} err="failed to get container status \"eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\": rpc error: code = NotFound desc = could not find container \"eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd\": container with ID starting with eab4a9822e2d3bcdf5659be4f99f83658ac0ee5e2b463a5f3ec013ed09a6c6cd not found: ID does not exist" Jan 21 15:51:10 crc kubenswrapper[4760]: I0121 15:51:10.361941 4760 
scope.go:117] "RemoveContainer" containerID="e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7" Jan 21 15:51:10 crc kubenswrapper[4760]: E0121 15:51:10.362182 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\": container with ID starting with e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7 not found: ID does not exist" containerID="e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7" Jan 21 15:51:10 crc kubenswrapper[4760]: I0121 15:51:10.362212 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7"} err="failed to get container status \"e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\": rpc error: code = NotFound desc = could not find container \"e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7\": container with ID starting with e8d49727c9ab370d4cac05c874b1f3707659865c1a450d4df832bc8bf20fd2c7 not found: ID does not exist" Jan 21 15:51:11 crc kubenswrapper[4760]: I0121 15:51:11.629239 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 21 15:51:12 crc kubenswrapper[4760]: E0121 15:51:12.035435 4760 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.129.56.65:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:51:12 crc kubenswrapper[4760]: I0121 15:51:12.035946 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:51:12 crc kubenswrapper[4760]: E0121 15:51:12.064906 4760 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.65:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188cc9d56848395d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 15:51:12.064543069 +0000 UTC m=+242.732312657,LastTimestamp:2026-01-21 15:51:12.064543069 +0000 UTC m=+242.732312657,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 15:51:12 crc kubenswrapper[4760]: I0121 15:51:12.252603 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"d0f2efde7195ec131858075c88639ba3b52a8cbacef94bb580631bd261361860"} Jan 21 15:51:12 crc kubenswrapper[4760]: I0121 15:51:12.351290 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bcf5p" Jan 21 15:51:12 crc kubenswrapper[4760]: I0121 15:51:12.351782 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-bcf5p" Jan 21 15:51:12 crc kubenswrapper[4760]: I0121 15:51:12.394966 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bcf5p" Jan 21 15:51:12 crc kubenswrapper[4760]: I0121 15:51:12.395643 4760 status_manager.go:851] "Failed to get status for pod" podUID="ddcb6012-213a-4989-8cb3-60fc763a8255" pod="openshift-marketplace/community-operators-bcf5p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bcf5p\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:12 crc kubenswrapper[4760]: I0121 15:51:12.396054 4760 status_manager.go:851] "Failed to get status for pod" podUID="430b8562-5701-4889-bf8f-71ddef9325b0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:12 crc kubenswrapper[4760]: E0121 15:51:12.454240 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:51:12Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:51:12Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:51:12Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:51:12Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:020b5bee2bbd09fbf64a1af808628bb76e9c70b9efdc49f38e5a50641590514c\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:78f8ee56f09c047b3acd7e5b6b8a0f9534952f418b658c9f5a6d45d12546e67c\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1670570239},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:aad5e438ec868272540a84dfc53b266c8a08267bec7a7617871dddeb1511dcb2\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:dd1e95af8b913ea8f010fa96cba36f2e7e5b1edfbf758c69b8c9eeb88c6911ea\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"
sizeBytes\\\":1202744046},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:2b72e40c5d5b36b681f40c16ebf3dcac6520ed0c79f174ba87f673ab7afd209a\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:d83ee77ad07e06451a84205ac4c85c69e912a1c975e1a8a95095d79218028dce\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1178956511},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:c10fecd0ba9b4f4f77af571afe82506201ee1139d1904e61b94987e47659a271\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:c44546b94a5203c84127195a969fe508a3c8e632c14d08b60a6cc3f15d19cc0d\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1167523055},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},
{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb
68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\
"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:12 crc kubenswrapper[4760]: E0121 15:51:12.454894 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:12 crc kubenswrapper[4760]: E0121 15:51:12.455107 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:12 crc kubenswrapper[4760]: E0121 15:51:12.455265 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:12 crc kubenswrapper[4760]: E0121 15:51:12.455466 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.65:6443: connect: connection 
refused" Jan 21 15:51:12 crc kubenswrapper[4760]: E0121 15:51:12.455484 4760 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 15:51:13 crc kubenswrapper[4760]: I0121 15:51:13.260670 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"307d949b95010e7cacd75448aa4f8cbcd11eb755920f7c1769010d4f189cb6cb"} Jan 21 15:51:13 crc kubenswrapper[4760]: I0121 15:51:13.262932 4760 status_manager.go:851] "Failed to get status for pod" podUID="ddcb6012-213a-4989-8cb3-60fc763a8255" pod="openshift-marketplace/community-operators-bcf5p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bcf5p\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:13 crc kubenswrapper[4760]: I0121 15:51:13.263191 4760 status_manager.go:851] "Failed to get status for pod" podUID="430b8562-5701-4889-bf8f-71ddef9325b0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:13 crc kubenswrapper[4760]: E0121 15:51:13.263550 4760 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.129.56.65:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:51:13 crc kubenswrapper[4760]: I0121 15:51:13.301770 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bcf5p" Jan 21 15:51:13 crc kubenswrapper[4760]: I0121 15:51:13.302263 4760 status_manager.go:851] "Failed to get status for pod" podUID="ddcb6012-213a-4989-8cb3-60fc763a8255" 
pod="openshift-marketplace/community-operators-bcf5p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bcf5p\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:13 crc kubenswrapper[4760]: I0121 15:51:13.302544 4760 status_manager.go:851] "Failed to get status for pod" podUID="430b8562-5701-4889-bf8f-71ddef9325b0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:14 crc kubenswrapper[4760]: E0121 15:51:14.266697 4760 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.129.56.65:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:51:14 crc kubenswrapper[4760]: E0121 15:51:14.607408 4760 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:14 crc kubenswrapper[4760]: E0121 15:51:14.608024 4760 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:14 crc kubenswrapper[4760]: E0121 15:51:14.608583 4760 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:14 crc kubenswrapper[4760]: E0121 15:51:14.608865 4760 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:14 crc kubenswrapper[4760]: E0121 15:51:14.609192 4760 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:14 crc kubenswrapper[4760]: I0121 15:51:14.609222 4760 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 21 15:51:14 crc kubenswrapper[4760]: E0121 15:51:14.609507 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.65:6443: connect: connection refused" interval="200ms" Jan 21 15:51:14 crc kubenswrapper[4760]: E0121 15:51:14.810840 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.65:6443: connect: connection refused" interval="400ms" Jan 21 15:51:15 crc kubenswrapper[4760]: E0121 15:51:15.211874 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.65:6443: connect: connection refused" interval="800ms" Jan 21 15:51:15 crc kubenswrapper[4760]: I0121 15:51:15.249219 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-26f8s" Jan 21 15:51:15 crc kubenswrapper[4760]: I0121 15:51:15.249284 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-26f8s" Jan 21 15:51:15 crc kubenswrapper[4760]: I0121 15:51:15.306699 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-26f8s" Jan 21 15:51:15 crc kubenswrapper[4760]: I0121 15:51:15.307761 4760 status_manager.go:851] "Failed to get status for pod" podUID="f08c19d6-0704-4562-8e0b-aa1d20161f70" pod="openshift-marketplace/redhat-operators-26f8s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-26f8s\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:15 crc kubenswrapper[4760]: I0121 15:51:15.308496 4760 status_manager.go:851] "Failed to get status for pod" podUID="ddcb6012-213a-4989-8cb3-60fc763a8255" pod="openshift-marketplace/community-operators-bcf5p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bcf5p\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:15 crc kubenswrapper[4760]: I0121 15:51:15.308960 4760 status_manager.go:851] "Failed to get status for pod" podUID="430b8562-5701-4889-bf8f-71ddef9325b0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:15 crc kubenswrapper[4760]: I0121 15:51:15.342083 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-26f8s" Jan 21 15:51:15 crc kubenswrapper[4760]: I0121 15:51:15.342489 4760 status_manager.go:851] "Failed to get status for pod" podUID="f08c19d6-0704-4562-8e0b-aa1d20161f70" pod="openshift-marketplace/redhat-operators-26f8s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-26f8s\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 
15:51:15 crc kubenswrapper[4760]: I0121 15:51:15.342650 4760 status_manager.go:851] "Failed to get status for pod" podUID="ddcb6012-213a-4989-8cb3-60fc763a8255" pod="openshift-marketplace/community-operators-bcf5p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bcf5p\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:15 crc kubenswrapper[4760]: I0121 15:51:15.342976 4760 status_manager.go:851] "Failed to get status for pod" podUID="430b8562-5701-4889-bf8f-71ddef9325b0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:16 crc kubenswrapper[4760]: E0121 15:51:16.013694 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.65:6443: connect: connection refused" interval="1.6s" Jan 21 15:51:16 crc kubenswrapper[4760]: E0121 15:51:16.680094 4760 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.129.56.65:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" volumeName="registry-storage" Jan 21 15:51:17 crc kubenswrapper[4760]: E0121 15:51:17.209117 4760 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.65:6443: connect: connection refused" 
event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188cc9d56848395d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 15:51:12.064543069 +0000 UTC m=+242.732312657,LastTimestamp:2026-01-21 15:51:12.064543069 +0000 UTC m=+242.732312657,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 15:51:17 crc kubenswrapper[4760]: E0121 15:51:17.614829 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.65:6443: connect: connection refused" interval="3.2s" Jan 21 15:51:19 crc kubenswrapper[4760]: I0121 15:51:19.626666 4760 status_manager.go:851] "Failed to get status for pod" podUID="ddcb6012-213a-4989-8cb3-60fc763a8255" pod="openshift-marketplace/community-operators-bcf5p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bcf5p\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:19 crc kubenswrapper[4760]: I0121 15:51:19.627378 4760 status_manager.go:851] "Failed to get status for pod" podUID="430b8562-5701-4889-bf8f-71ddef9325b0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 
38.129.56.65:6443: connect: connection refused" Jan 21 15:51:19 crc kubenswrapper[4760]: I0121 15:51:19.628076 4760 status_manager.go:851] "Failed to get status for pod" podUID="f08c19d6-0704-4562-8e0b-aa1d20161f70" pod="openshift-marketplace/redhat-operators-26f8s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-26f8s\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:20 crc kubenswrapper[4760]: I0121 15:51:20.622343 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:51:20 crc kubenswrapper[4760]: I0121 15:51:20.623684 4760 status_manager.go:851] "Failed to get status for pod" podUID="ddcb6012-213a-4989-8cb3-60fc763a8255" pod="openshift-marketplace/community-operators-bcf5p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bcf5p\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:20 crc kubenswrapper[4760]: I0121 15:51:20.624421 4760 status_manager.go:851] "Failed to get status for pod" podUID="430b8562-5701-4889-bf8f-71ddef9325b0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:20 crc kubenswrapper[4760]: I0121 15:51:20.624823 4760 status_manager.go:851] "Failed to get status for pod" podUID="f08c19d6-0704-4562-8e0b-aa1d20161f70" pod="openshift-marketplace/redhat-operators-26f8s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-26f8s\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:20 crc kubenswrapper[4760]: I0121 15:51:20.641828 4760 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="1cda26f1-46aa-4bba-8048-c06c3ddec6b2" Jan 21 15:51:20 crc kubenswrapper[4760]: I0121 15:51:20.641873 4760 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1cda26f1-46aa-4bba-8048-c06c3ddec6b2" Jan 21 15:51:20 crc kubenswrapper[4760]: E0121 15:51:20.642360 4760 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:51:20 crc kubenswrapper[4760]: I0121 15:51:20.642799 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:51:20 crc kubenswrapper[4760]: W0121 15:51:20.674653 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-cafd6d94f13768b726347429eb6d1904a5576cb787fdea00ff9cd4758495feb4 WatchSource:0}: Error finding container cafd6d94f13768b726347429eb6d1904a5576cb787fdea00ff9cd4758495feb4: Status 404 returned error can't find the container with id cafd6d94f13768b726347429eb6d1904a5576cb787fdea00ff9cd4758495feb4 Jan 21 15:51:20 crc kubenswrapper[4760]: E0121 15:51:20.815978 4760 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.65:6443: connect: connection refused" interval="6.4s" Jan 21 15:51:21 crc kubenswrapper[4760]: I0121 15:51:21.305339 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"cafd6d94f13768b726347429eb6d1904a5576cb787fdea00ff9cd4758495feb4"} Jan 21 15:51:22 crc 
kubenswrapper[4760]: I0121 15:51:22.315465 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4604c312ef5effdd1bff60d9ec5f5c09adae7dee6e465f0ecbc036d252aff12c"} Jan 21 15:51:22 crc kubenswrapper[4760]: I0121 15:51:22.315764 4760 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1cda26f1-46aa-4bba-8048-c06c3ddec6b2" Jan 21 15:51:22 crc kubenswrapper[4760]: I0121 15:51:22.315794 4760 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1cda26f1-46aa-4bba-8048-c06c3ddec6b2" Jan 21 15:51:22 crc kubenswrapper[4760]: E0121 15:51:22.316469 4760 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:51:22 crc kubenswrapper[4760]: I0121 15:51:22.316529 4760 status_manager.go:851] "Failed to get status for pod" podUID="ddcb6012-213a-4989-8cb3-60fc763a8255" pod="openshift-marketplace/community-operators-bcf5p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bcf5p\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:22 crc kubenswrapper[4760]: I0121 15:51:22.317150 4760 status_manager.go:851] "Failed to get status for pod" podUID="430b8562-5701-4889-bf8f-71ddef9325b0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:22 crc kubenswrapper[4760]: I0121 15:51:22.317487 4760 status_manager.go:851] "Failed to get status for pod" 
podUID="f08c19d6-0704-4562-8e0b-aa1d20161f70" pod="openshift-marketplace/redhat-operators-26f8s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-26f8s\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:22 crc kubenswrapper[4760]: E0121 15:51:22.844807 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:51:22Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:51:22Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:51:22Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T15:51:22Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:020b5bee2bbd09fbf64a1af808628bb76e9c70b9efdc49f38e5a50641590514c\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:78f8ee56f09c047b3acd7e5b6b8a0f9534952f418b658c9f5a6d45d12546e67c\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1670570239},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b
819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:aad5e438ec868272540a84dfc53b266c8a08267bec7a7617871dddeb1511dcb2\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:dd1e95af8b913ea8f010fa96cba36f2e7e5b1edfbf758c69b8c9eeb88c6911ea\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1202744046},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:2b72e40c5d5b36b681f40c16ebf3dcac6520ed0c79f174ba87f673ab7afd209a\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:d83ee77ad07e06451a84205ac4c85c69e912a1c975e1a8a95095d79218028dce\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1178956511},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:c10fecd0ba9b4f4f77af571afe82506201ee1139d1904e61b94987e47659a271\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:c44546b94a5203c84127195a969fe508a3c8e632c14d08b60a6cc3f15d19cc0d\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1167523055},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc3582577
1aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962
189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c
3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:22 crc kubenswrapper[4760]: E0121 15:51:22.846017 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:22 crc kubenswrapper[4760]: E0121 15:51:22.846546 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:22 crc kubenswrapper[4760]: E0121 15:51:22.847064 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:22 crc kubenswrapper[4760]: E0121 15:51:22.847488 4760 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get 
\"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:22 crc kubenswrapper[4760]: E0121 15:51:22.847518 4760 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 15:51:23 crc kubenswrapper[4760]: I0121 15:51:23.325636 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 21 15:51:23 crc kubenswrapper[4760]: I0121 15:51:23.325708 4760 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6" exitCode=1 Jan 21 15:51:23 crc kubenswrapper[4760]: I0121 15:51:23.325793 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6"} Jan 21 15:51:23 crc kubenswrapper[4760]: I0121 15:51:23.326410 4760 scope.go:117] "RemoveContainer" containerID="147972f5e7520d3a32ca2e3ce302d86568db7a36bf12ff406bf464f08161f3e6" Jan 21 15:51:23 crc kubenswrapper[4760]: I0121 15:51:23.326644 4760 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:23 crc kubenswrapper[4760]: I0121 15:51:23.326975 4760 status_manager.go:851] "Failed to get status for pod" podUID="f08c19d6-0704-4562-8e0b-aa1d20161f70" pod="openshift-marketplace/redhat-operators-26f8s" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-26f8s\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:23 crc kubenswrapper[4760]: I0121 15:51:23.327475 4760 status_manager.go:851] "Failed to get status for pod" podUID="ddcb6012-213a-4989-8cb3-60fc763a8255" pod="openshift-marketplace/community-operators-bcf5p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bcf5p\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:23 crc kubenswrapper[4760]: I0121 15:51:23.327819 4760 status_manager.go:851] "Failed to get status for pod" podUID="430b8562-5701-4889-bf8f-71ddef9325b0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:23 crc kubenswrapper[4760]: I0121 15:51:23.328186 4760 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="4604c312ef5effdd1bff60d9ec5f5c09adae7dee6e465f0ecbc036d252aff12c" exitCode=0 Jan 21 15:51:23 crc kubenswrapper[4760]: I0121 15:51:23.328249 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"4604c312ef5effdd1bff60d9ec5f5c09adae7dee6e465f0ecbc036d252aff12c"} Jan 21 15:51:23 crc kubenswrapper[4760]: I0121 15:51:23.328618 4760 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1cda26f1-46aa-4bba-8048-c06c3ddec6b2" Jan 21 15:51:23 crc kubenswrapper[4760]: I0121 15:51:23.328653 4760 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1cda26f1-46aa-4bba-8048-c06c3ddec6b2" Jan 21 15:51:23 crc kubenswrapper[4760]: I0121 15:51:23.329048 4760 
status_manager.go:851] "Failed to get status for pod" podUID="430b8562-5701-4889-bf8f-71ddef9325b0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:23 crc kubenswrapper[4760]: E0121 15:51:23.329142 4760 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:51:23 crc kubenswrapper[4760]: I0121 15:51:23.329477 4760 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:23 crc kubenswrapper[4760]: I0121 15:51:23.329678 4760 status_manager.go:851] "Failed to get status for pod" podUID="f08c19d6-0704-4562-8e0b-aa1d20161f70" pod="openshift-marketplace/redhat-operators-26f8s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-26f8s\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:23 crc kubenswrapper[4760]: I0121 15:51:23.329822 4760 status_manager.go:851] "Failed to get status for pod" podUID="ddcb6012-213a-4989-8cb3-60fc763a8255" pod="openshift-marketplace/community-operators-bcf5p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-bcf5p\": dial tcp 38.129.56.65:6443: connect: connection refused" Jan 21 15:51:24 crc kubenswrapper[4760]: I0121 15:51:24.336040 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3152c2e9bb935cc57c982b71535baef8e39b775a5064d9a2a82b92a2d23a0c76"} Jan 21 15:51:24 crc kubenswrapper[4760]: I0121 15:51:24.341249 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 21 15:51:24 crc kubenswrapper[4760]: I0121 15:51:24.341408 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a72a2ea61f9f87f8e4bbe1f28b00d2b25254486b16f55c43ba2293f29a38eddc"} Jan 21 15:51:24 crc kubenswrapper[4760]: I0121 15:51:24.814367 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:51:25 crc kubenswrapper[4760]: I0121 15:51:25.350232 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b5c418e9aa95eaa49812bb7d7b6982f8a0d75209a0b969393cee0a5fbc4de0a6"} Jan 21 15:51:25 crc kubenswrapper[4760]: I0121 15:51:25.459624 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 15:51:26 crc kubenswrapper[4760]: I0121 15:51:26.365369 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"85f126274e9e0b4da74890876167d85f4e8d3a6452091a5ffeaefc584682b84f"} Jan 21 15:51:26 crc kubenswrapper[4760]: I0121 15:51:26.365740 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"852480d743ab7e29fc881fd6b9bcc0db6360fb894950b304879d56d2e7e4069d"} Jan 21 15:51:26 crc kubenswrapper[4760]: I0121 15:51:26.365757 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5bddc85ffb777c91310f1c1ed427b33a7b121529a5ccc3534eab54dc8325c015"} Jan 21 15:51:26 crc kubenswrapper[4760]: I0121 15:51:26.372465 4760 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1cda26f1-46aa-4bba-8048-c06c3ddec6b2" Jan 21 15:51:26 crc kubenswrapper[4760]: I0121 15:51:26.372577 4760 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1cda26f1-46aa-4bba-8048-c06c3ddec6b2" Jan 21 15:51:30 crc kubenswrapper[4760]: I0121 15:51:30.643491 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:51:30 crc kubenswrapper[4760]: I0121 15:51:30.645558 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:51:30 crc kubenswrapper[4760]: I0121 15:51:30.645706 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:51:30 crc kubenswrapper[4760]: I0121 15:51:30.649094 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:51:31 crc kubenswrapper[4760]: I0121 15:51:31.381094 4760 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 15:51:32 crc kubenswrapper[4760]: I0121 15:51:32.397223 4760 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="1cda26f1-46aa-4bba-8048-c06c3ddec6b2"
Jan 21 15:51:32 crc kubenswrapper[4760]: I0121 15:51:32.397495 4760 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1cda26f1-46aa-4bba-8048-c06c3ddec6b2"
Jan 21 15:51:32 crc kubenswrapper[4760]: I0121 15:51:32.402183 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 15:51:32 crc kubenswrapper[4760]: I0121 15:51:32.407099 4760 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="46e600f0-55e2-44af-ac80-f0bad89e8c05"
Jan 21 15:51:32 crc kubenswrapper[4760]: I0121 15:51:32.971569 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 15:51:32 crc kubenswrapper[4760]: I0121 15:51:32.976168 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 15:51:33 crc kubenswrapper[4760]: I0121 15:51:33.402203 4760 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1cda26f1-46aa-4bba-8048-c06c3ddec6b2"
Jan 21 15:51:33 crc kubenswrapper[4760]: I0121 15:51:33.402238 4760 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1cda26f1-46aa-4bba-8048-c06c3ddec6b2"
Jan 21 15:51:35 crc kubenswrapper[4760]: I0121 15:51:35.463401 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 15:51:39 crc kubenswrapper[4760]: I0121 15:51:39.647061 4760 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="46e600f0-55e2-44af-ac80-f0bad89e8c05"
Jan 21 15:51:40 crc kubenswrapper[4760]: I0121 15:51:40.736821 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 21 15:51:40 crc kubenswrapper[4760]: I0121 15:51:40.981373 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 21 15:51:41 crc kubenswrapper[4760]: I0121 15:51:41.013732 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 21 15:51:41 crc kubenswrapper[4760]: I0121 15:51:41.414245 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 21 15:51:41 crc kubenswrapper[4760]: I0121 15:51:41.457487 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 21 15:51:41 crc kubenswrapper[4760]: I0121 15:51:41.799470 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 21 15:51:41 crc kubenswrapper[4760]: I0121 15:51:41.818434 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 21 15:51:41 crc kubenswrapper[4760]: I0121 15:51:41.914607 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 21 15:51:42 crc kubenswrapper[4760]: I0121 15:51:42.042787 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 21 15:51:42 crc kubenswrapper[4760]: I0121 15:51:42.295902 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 21 15:51:42 crc kubenswrapper[4760]: I0121 15:51:42.594783 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Jan 21 15:51:43 crc kubenswrapper[4760]: I0121 15:51:43.152887 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 21 15:51:43 crc kubenswrapper[4760]: I0121 15:51:43.286841 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 21 15:51:43 crc kubenswrapper[4760]: I0121 15:51:43.414878 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 21 15:51:43 crc kubenswrapper[4760]: I0121 15:51:43.494498 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 21 15:51:43 crc kubenswrapper[4760]: I0121 15:51:43.579960 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 21 15:51:43 crc kubenswrapper[4760]: I0121 15:51:43.591245 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 21 15:51:43 crc kubenswrapper[4760]: I0121 15:51:43.600555 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 21 15:51:43 crc kubenswrapper[4760]: I0121 15:51:43.604084 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 21 15:51:43 crc kubenswrapper[4760]: I0121 15:51:43.928261 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 21 15:51:44 crc kubenswrapper[4760]: I0121 15:51:44.020415 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 21 15:51:44 crc kubenswrapper[4760]: I0121 15:51:44.041028 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 21 15:51:44 crc kubenswrapper[4760]: I0121 15:51:44.060766 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 21 15:51:44 crc kubenswrapper[4760]: I0121 15:51:44.088170 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Jan 21 15:51:44 crc kubenswrapper[4760]: I0121 15:51:44.104213 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 21 15:51:44 crc kubenswrapper[4760]: I0121 15:51:44.132847 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 21 15:51:44 crc kubenswrapper[4760]: I0121 15:51:44.245620 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 21 15:51:44 crc kubenswrapper[4760]: I0121 15:51:44.316369 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 21 15:51:44 crc kubenswrapper[4760]: I0121 15:51:44.366986 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 21 15:51:44 crc kubenswrapper[4760]: I0121 15:51:44.402129 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 21 15:51:44 crc kubenswrapper[4760]: I0121 15:51:44.451941 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 21 15:51:44 crc kubenswrapper[4760]: I0121 15:51:44.503626 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Jan 21 15:51:44 crc kubenswrapper[4760]: I0121 15:51:44.653768 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 21 15:51:44 crc kubenswrapper[4760]: I0121 15:51:44.740717 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 21 15:51:44 crc kubenswrapper[4760]: I0121 15:51:44.808902 4760 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 21 15:51:44 crc kubenswrapper[4760]: I0121 15:51:44.837236 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 21 15:51:44 crc kubenswrapper[4760]: I0121 15:51:44.875136 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 21 15:51:45 crc kubenswrapper[4760]: I0121 15:51:45.009395 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 21 15:51:45 crc kubenswrapper[4760]: I0121 15:51:45.066087 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 21 15:51:45 crc kubenswrapper[4760]: I0121 15:51:45.079278 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 21 15:51:45 crc kubenswrapper[4760]: I0121 15:51:45.501930 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 21 15:51:45 crc kubenswrapper[4760]: I0121 15:51:45.510087 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 21 15:51:45 crc kubenswrapper[4760]: I0121 15:51:45.539825 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 21 15:51:45 crc kubenswrapper[4760]: I0121 15:51:45.547454 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Jan 21 15:51:45 crc kubenswrapper[4760]: I0121 15:51:45.574260 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 21 15:51:45 crc kubenswrapper[4760]: I0121 15:51:45.670964 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 21 15:51:45 crc kubenswrapper[4760]: I0121 15:51:45.696303 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 21 15:51:45 crc kubenswrapper[4760]: I0121 15:51:45.707358 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 21 15:51:45 crc kubenswrapper[4760]: I0121 15:51:45.708516 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 21 15:51:45 crc kubenswrapper[4760]: I0121 15:51:45.715113 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 21 15:51:45 crc kubenswrapper[4760]: I0121 15:51:45.773232 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 21 15:51:45 crc kubenswrapper[4760]: I0121 15:51:45.915874 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 21 15:51:45 crc kubenswrapper[4760]: I0121 15:51:45.985032 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 21 15:51:46 crc kubenswrapper[4760]: I0121 15:51:46.038015 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 21 15:51:46 crc kubenswrapper[4760]: I0121 15:51:46.116964 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 21 15:51:46 crc kubenswrapper[4760]: I0121 15:51:46.161593 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 21 15:51:46 crc kubenswrapper[4760]: I0121 15:51:46.208915 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 21 15:51:46 crc kubenswrapper[4760]: I0121 15:51:46.215905 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 21 15:51:46 crc kubenswrapper[4760]: I0121 15:51:46.223010 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 21 15:51:46 crc kubenswrapper[4760]: I0121 15:51:46.257906 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 21 15:51:46 crc kubenswrapper[4760]: I0121 15:51:46.460964 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 21 15:51:46 crc kubenswrapper[4760]: I0121 15:51:46.490444 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 21 15:51:46 crc kubenswrapper[4760]: I0121 15:51:46.610268 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 21 15:51:46 crc kubenswrapper[4760]: I0121 15:51:46.611270 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 21 15:51:46 crc kubenswrapper[4760]: I0121 15:51:46.633605 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Jan 21 15:51:46 crc kubenswrapper[4760]: I0121 15:51:46.724647 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 21 15:51:46 crc kubenswrapper[4760]: I0121 15:51:46.885971 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 21 15:51:46 crc kubenswrapper[4760]: I0121 15:51:46.976087 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 21 15:51:47 crc kubenswrapper[4760]: I0121 15:51:47.052207 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 21 15:51:47 crc kubenswrapper[4760]: I0121 15:51:47.234246 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 21 15:51:47 crc kubenswrapper[4760]: I0121 15:51:47.537317 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 21 15:51:47 crc kubenswrapper[4760]: I0121 15:51:47.575648 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 21 15:51:47 crc kubenswrapper[4760]: I0121 15:51:47.590484 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Jan 21 15:51:47 crc kubenswrapper[4760]: I0121 15:51:47.703991 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 21 15:51:47 crc kubenswrapper[4760]: I0121 15:51:47.925928 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Jan 21 15:51:47 crc kubenswrapper[4760]: I0121 15:51:47.986249 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 21 15:51:48 crc kubenswrapper[4760]: I0121 15:51:48.228123 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Jan 21 15:51:48 crc kubenswrapper[4760]: I0121 15:51:48.270396 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 21 15:51:48 crc kubenswrapper[4760]: I0121 15:51:48.278402 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 21 15:51:48 crc kubenswrapper[4760]: I0121 15:51:48.312847 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 21 15:51:48 crc kubenswrapper[4760]: I0121 15:51:48.562023 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 21 15:51:48 crc kubenswrapper[4760]: I0121 15:51:48.630729 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 21 15:51:48 crc kubenswrapper[4760]: I0121 15:51:48.686533 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Jan 21 15:51:48 crc kubenswrapper[4760]: I0121 15:51:48.725764 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Jan 21 15:51:48 crc kubenswrapper[4760]: I0121 15:51:48.746652 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 21 15:51:48 crc kubenswrapper[4760]: I0121 15:51:48.759031 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 21 15:51:48 crc kubenswrapper[4760]: I0121 15:51:48.781310 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 21 15:51:48 crc kubenswrapper[4760]: I0121 15:51:48.872905 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 21 15:51:48 crc kubenswrapper[4760]: I0121 15:51:48.877644 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 21 15:51:48 crc kubenswrapper[4760]: I0121 15:51:48.986587 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 21 15:51:49 crc kubenswrapper[4760]: I0121 15:51:49.004434 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 21 15:51:49 crc kubenswrapper[4760]: I0121 15:51:49.013700 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 21 15:51:49 crc kubenswrapper[4760]: I0121 15:51:49.068456 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 21 15:51:49 crc kubenswrapper[4760]: I0121 15:51:49.099881 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 21 15:51:49 crc kubenswrapper[4760]: I0121 15:51:49.139380 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 21 15:51:49 crc kubenswrapper[4760]: I0121 15:51:49.140750 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 21 15:51:49 crc kubenswrapper[4760]: I0121 15:51:49.154383 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 21 15:51:49 crc kubenswrapper[4760]: I0121 15:51:49.184207 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 21 15:51:49 crc kubenswrapper[4760]: I0121 15:51:49.217786 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 21 15:51:49 crc kubenswrapper[4760]: I0121 15:51:49.358927 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 21 15:51:49 crc kubenswrapper[4760]: I0121 15:51:49.364627 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 21 15:51:49 crc kubenswrapper[4760]: I0121 15:51:49.368726 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 21 15:51:49 crc kubenswrapper[4760]: I0121 15:51:49.525489 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 21 15:51:49 crc kubenswrapper[4760]: I0121 15:51:49.738860 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 21 15:51:49 crc kubenswrapper[4760]: I0121 15:51:49.747981 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 21 15:51:49 crc kubenswrapper[4760]: I0121 15:51:49.757254 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 21 15:51:49 crc kubenswrapper[4760]: I0121 15:51:49.849951 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 21 15:51:49 crc kubenswrapper[4760]: I0121 15:51:49.894980 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 21 15:51:49 crc kubenswrapper[4760]: I0121 15:51:49.931392 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 21 15:51:49 crc kubenswrapper[4760]: I0121 15:51:49.931706 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 21 15:51:49 crc kubenswrapper[4760]: I0121 15:51:49.996189 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 21 15:51:50 crc kubenswrapper[4760]: I0121 15:51:50.025571 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 21 15:51:50 crc kubenswrapper[4760]: I0121 15:51:50.048139 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 21 15:51:50 crc kubenswrapper[4760]: I0121 15:51:50.059568 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 21 15:51:50 crc kubenswrapper[4760]: I0121 15:51:50.090376 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 21 15:51:50 crc kubenswrapper[4760]: I0121 15:51:50.092750 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 21 15:51:50 crc kubenswrapper[4760]: I0121 15:51:50.206866 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 21 15:51:50 crc kubenswrapper[4760]: I0121 15:51:50.226751 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 21 15:51:50 crc kubenswrapper[4760]: I0121 15:51:50.350260 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 21 15:51:50 crc kubenswrapper[4760]: I0121 15:51:50.359028 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 21 15:51:50 crc kubenswrapper[4760]: I0121 15:51:50.412906 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Jan 21 15:51:50 crc kubenswrapper[4760]: I0121 15:51:50.466294 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 21 15:51:50 crc kubenswrapper[4760]: I0121 15:51:50.468294 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 21 15:51:50 crc kubenswrapper[4760]: I0121 15:51:50.907054 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 21 15:51:50 crc kubenswrapper[4760]: I0121 15:51:50.944933 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 21 15:51:50 crc kubenswrapper[4760]: I0121 15:51:50.956960 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 21 15:51:50 crc kubenswrapper[4760]: I0121 15:51:50.992475 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.122042 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.140046 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.163046 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.171427 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.179667 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.239778 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.373619 4760 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.377643 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.377699 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-857f44b5c5-2sxxq","openshift-kube-apiserver/kube-apiserver-crc","openshift-controller-manager/controller-manager-7986bbbbd5-zdfgt"]
Jan 21 15:51:51 crc kubenswrapper[4760]: E0121 15:51:51.377900 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="430b8562-5701-4889-bf8f-71ddef9325b0" containerName="installer"
Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.377920 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="430b8562-5701-4889-bf8f-71ddef9325b0" containerName="installer"
Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.378014 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="430b8562-5701-4889-bf8f-71ddef9325b0" containerName="installer"
Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.378262 4760 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1cda26f1-46aa-4bba-8048-c06c3ddec6b2"
Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.378303 4760 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1cda26f1-46aa-4bba-8048-c06c3ddec6b2"
Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.378712 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-857f44b5c5-2sxxq"
Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.379619 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7986bbbbd5-zdfgt"
Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.385306 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.385524 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.385618 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.385580 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.385912 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.385988 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.386001 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.386476 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.386572 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.386599 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.386645 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.386728 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.387288 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.389687 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.398513 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.442770 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=20.442747651 podStartE2EDuration="20.442747651s" podCreationTimestamp="2026-01-21 15:51:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:51:51.440952874 +0000 UTC m=+282.108722442" watchObservedRunningTime="2026-01-21 15:51:51.442747651 +0000 UTC m=+282.110517249"
Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.444442 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0652d45a-9dfb-4ada-bb28-39630775762e-serving-cert\") pod \"controller-manager-7986bbbbd5-zdfgt\" (UID: \"0652d45a-9dfb-4ada-bb28-39630775762e\") " pod="openshift-controller-manager/controller-manager-7986bbbbd5-zdfgt"
Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.444567 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf4qb\" (UniqueName: \"kubernetes.io/projected/0652d45a-9dfb-4ada-bb28-39630775762e-kube-api-access-bf4qb\") pod \"controller-manager-7986bbbbd5-zdfgt\" (UID: \"0652d45a-9dfb-4ada-bb28-39630775762e\") " pod="openshift-controller-manager/controller-manager-7986bbbbd5-zdfgt"
Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.444667 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081-serving-cert\") pod \"route-controller-manager-857f44b5c5-2sxxq\" (UID: \"ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081\") " pod="openshift-route-controller-manager/route-controller-manager-857f44b5c5-2sxxq"
Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.444753 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2k5c8\" (UniqueName: \"kubernetes.io/projected/ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081-kube-api-access-2k5c8\") pod \"route-controller-manager-857f44b5c5-2sxxq\" (UID: \"ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081\") " pod="openshift-route-controller-manager/route-controller-manager-857f44b5c5-2sxxq"
Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.444851 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0652d45a-9dfb-4ada-bb28-39630775762e-client-ca\") pod \"controller-manager-7986bbbbd5-zdfgt\" (UID: \"0652d45a-9dfb-4ada-bb28-39630775762e\") " pod="openshift-controller-manager/controller-manager-7986bbbbd5-zdfgt"
Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.444923 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0652d45a-9dfb-4ada-bb28-39630775762e-proxy-ca-bundles\") pod \"controller-manager-7986bbbbd5-zdfgt\" (UID: \"0652d45a-9dfb-4ada-bb28-39630775762e\") " pod="openshift-controller-manager/controller-manager-7986bbbbd5-zdfgt"
Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.445102 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0652d45a-9dfb-4ada-bb28-39630775762e-config\") pod \"controller-manager-7986bbbbd5-zdfgt\" (UID: \"0652d45a-9dfb-4ada-bb28-39630775762e\") " pod="openshift-controller-manager/controller-manager-7986bbbbd5-zdfgt"
Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.445244 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081-config\") pod \"route-controller-manager-857f44b5c5-2sxxq\" (UID: \"ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081\") " pod="openshift-route-controller-manager/route-controller-manager-857f44b5c5-2sxxq"
Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.445274 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081-client-ca\") pod \"route-controller-manager-857f44b5c5-2sxxq\" (UID: \"ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081\") " pod="openshift-route-controller-manager/route-controller-manager-857f44b5c5-2sxxq"
Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.489722 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.546833 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0652d45a-9dfb-4ada-bb28-39630775762e-config\") pod \"controller-manager-7986bbbbd5-zdfgt\" (UID: \"0652d45a-9dfb-4ada-bb28-39630775762e\") " pod="openshift-controller-manager/controller-manager-7986bbbbd5-zdfgt"
Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.547190 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081-config\") pod \"route-controller-manager-857f44b5c5-2sxxq\" (UID: \"ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081\") " pod="openshift-route-controller-manager/route-controller-manager-857f44b5c5-2sxxq"
Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.547305 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081-client-ca\") pod \"route-controller-manager-857f44b5c5-2sxxq\" (UID: \"ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081\") " pod="openshift-route-controller-manager/route-controller-manager-857f44b5c5-2sxxq"
Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.547463 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0652d45a-9dfb-4ada-bb28-39630775762e-serving-cert\") pod \"controller-manager-7986bbbbd5-zdfgt\" (UID: \"0652d45a-9dfb-4ada-bb28-39630775762e\") " pod="openshift-controller-manager/controller-manager-7986bbbbd5-zdfgt"
Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.547581 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bf4qb\" (UniqueName: \"kubernetes.io/projected/0652d45a-9dfb-4ada-bb28-39630775762e-kube-api-access-bf4qb\") pod \"controller-manager-7986bbbbd5-zdfgt\" (UID: \"0652d45a-9dfb-4ada-bb28-39630775762e\") " pod="openshift-controller-manager/controller-manager-7986bbbbd5-zdfgt"
Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.547734 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081-serving-cert\") pod \"route-controller-manager-857f44b5c5-2sxxq\" (UID: \"ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081\") " pod="openshift-route-controller-manager/route-controller-manager-857f44b5c5-2sxxq"
Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.547841 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2k5c8\" (UniqueName: \"kubernetes.io/projected/ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081-kube-api-access-2k5c8\") pod \"route-controller-manager-857f44b5c5-2sxxq\" (UID: \"ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081\") " pod="openshift-route-controller-manager/route-controller-manager-857f44b5c5-2sxxq"
Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.547943 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0652d45a-9dfb-4ada-bb28-39630775762e-client-ca\") pod \"controller-manager-7986bbbbd5-zdfgt\" (UID: \"0652d45a-9dfb-4ada-bb28-39630775762e\") " pod="openshift-controller-manager/controller-manager-7986bbbbd5-zdfgt"
Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.548027 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0652d45a-9dfb-4ada-bb28-39630775762e-proxy-ca-bundles\") pod \"controller-manager-7986bbbbd5-zdfgt\" (UID: \"0652d45a-9dfb-4ada-bb28-39630775762e\") " pod="openshift-controller-manager/controller-manager-7986bbbbd5-zdfgt"
Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.548705 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0652d45a-9dfb-4ada-bb28-39630775762e-client-ca\") pod \"controller-manager-7986bbbbd5-zdfgt\" (UID: \"0652d45a-9dfb-4ada-bb28-39630775762e\") " pod="openshift-controller-manager/controller-manager-7986bbbbd5-zdfgt"
Jan 21 15:51:51 crc kubenswrapper[4760]:
I0121 15:51:51.548774 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0652d45a-9dfb-4ada-bb28-39630775762e-config\") pod \"controller-manager-7986bbbbd5-zdfgt\" (UID: \"0652d45a-9dfb-4ada-bb28-39630775762e\") " pod="openshift-controller-manager/controller-manager-7986bbbbd5-zdfgt" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.548963 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0652d45a-9dfb-4ada-bb28-39630775762e-proxy-ca-bundles\") pod \"controller-manager-7986bbbbd5-zdfgt\" (UID: \"0652d45a-9dfb-4ada-bb28-39630775762e\") " pod="openshift-controller-manager/controller-manager-7986bbbbd5-zdfgt" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.549350 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081-client-ca\") pod \"route-controller-manager-857f44b5c5-2sxxq\" (UID: \"ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081\") " pod="openshift-route-controller-manager/route-controller-manager-857f44b5c5-2sxxq" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.549536 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081-config\") pod \"route-controller-manager-857f44b5c5-2sxxq\" (UID: \"ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081\") " pod="openshift-route-controller-manager/route-controller-manager-857f44b5c5-2sxxq" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.554518 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0652d45a-9dfb-4ada-bb28-39630775762e-serving-cert\") pod \"controller-manager-7986bbbbd5-zdfgt\" (UID: \"0652d45a-9dfb-4ada-bb28-39630775762e\") " 
pod="openshift-controller-manager/controller-manager-7986bbbbd5-zdfgt" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.555615 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081-serving-cert\") pod \"route-controller-manager-857f44b5c5-2sxxq\" (UID: \"ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081\") " pod="openshift-route-controller-manager/route-controller-manager-857f44b5c5-2sxxq" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.567094 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2k5c8\" (UniqueName: \"kubernetes.io/projected/ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081-kube-api-access-2k5c8\") pod \"route-controller-manager-857f44b5c5-2sxxq\" (UID: \"ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081\") " pod="openshift-route-controller-manager/route-controller-manager-857f44b5c5-2sxxq" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.568506 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf4qb\" (UniqueName: \"kubernetes.io/projected/0652d45a-9dfb-4ada-bb28-39630775762e-kube-api-access-bf4qb\") pod \"controller-manager-7986bbbbd5-zdfgt\" (UID: \"0652d45a-9dfb-4ada-bb28-39630775762e\") " pod="openshift-controller-manager/controller-manager-7986bbbbd5-zdfgt" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.612530 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.619691 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.659733 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 21 15:51:51 crc 
kubenswrapper[4760]: I0121 15:51:51.699072 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-857f44b5c5-2sxxq" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.711477 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7986bbbbd5-zdfgt" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.728458 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.734559 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.878752 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.890550 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.890839 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.901561 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.907484 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.930375 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.930386 4760 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 21 15:51:51 crc kubenswrapper[4760]: I0121 15:51:51.967205 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 21 15:51:52 crc kubenswrapper[4760]: I0121 15:51:52.016758 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 21 15:51:52 crc kubenswrapper[4760]: I0121 15:51:52.029852 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 21 15:51:52 crc kubenswrapper[4760]: I0121 15:51:52.113610 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 21 15:51:52 crc kubenswrapper[4760]: I0121 15:51:52.193137 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 21 15:51:53 crc kubenswrapper[4760]: I0121 15:51:52.270374 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 21 15:51:53 crc kubenswrapper[4760]: I0121 15:51:52.342316 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 21 15:51:53 crc kubenswrapper[4760]: I0121 15:51:52.440920 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 21 15:51:53 crc kubenswrapper[4760]: I0121 15:51:52.510937 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 21 15:51:53 crc kubenswrapper[4760]: I0121 15:51:52.512141 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 21 
15:51:53 crc kubenswrapper[4760]: I0121 15:51:52.598186 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 21 15:51:53 crc kubenswrapper[4760]: I0121 15:51:52.600036 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 21 15:51:53 crc kubenswrapper[4760]: I0121 15:51:52.620464 4760 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 21 15:51:53 crc kubenswrapper[4760]: I0121 15:51:52.798814 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 21 15:51:53 crc kubenswrapper[4760]: I0121 15:51:52.897869 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 21 15:51:53 crc kubenswrapper[4760]: I0121 15:51:52.966977 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 21 15:51:53 crc kubenswrapper[4760]: I0121 15:51:53.022742 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 21 15:51:53 crc kubenswrapper[4760]: I0121 15:51:53.091757 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 21 15:51:53 crc kubenswrapper[4760]: I0121 15:51:53.218340 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 21 15:51:53 crc kubenswrapper[4760]: I0121 15:51:53.331637 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 21 15:51:53 crc kubenswrapper[4760]: I0121 15:51:53.356635 4760 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 21 15:51:53 crc kubenswrapper[4760]: I0121 15:51:53.395608 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 21 15:51:53 crc kubenswrapper[4760]: I0121 15:51:53.488567 4760 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 21 15:51:53 crc kubenswrapper[4760]: I0121 15:51:53.518159 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 21 15:51:53 crc kubenswrapper[4760]: I0121 15:51:53.637185 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 21 15:51:53 crc kubenswrapper[4760]: I0121 15:51:53.694837 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 21 15:51:53 crc kubenswrapper[4760]: I0121 15:51:53.918017 4760 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 15:51:53 crc kubenswrapper[4760]: I0121 15:51:53.918638 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://307d949b95010e7cacd75448aa4f8cbcd11eb755920f7c1769010d4f189cb6cb" gracePeriod=5 Jan 21 15:51:53 crc kubenswrapper[4760]: I0121 15:51:53.978634 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 21 15:51:53 crc kubenswrapper[4760]: I0121 15:51:53.992569 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 21 15:51:53 crc kubenswrapper[4760]: I0121 15:51:53.996995 4760 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress-operator"/"metrics-tls" Jan 21 15:51:54 crc kubenswrapper[4760]: I0121 15:51:54.122712 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 21 15:51:54 crc kubenswrapper[4760]: I0121 15:51:54.136049 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7986bbbbd5-zdfgt"] Jan 21 15:51:54 crc kubenswrapper[4760]: I0121 15:51:54.138973 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 21 15:51:54 crc kubenswrapper[4760]: I0121 15:51:54.146263 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-857f44b5c5-2sxxq"] Jan 21 15:51:54 crc kubenswrapper[4760]: I0121 15:51:54.248252 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 21 15:51:54 crc kubenswrapper[4760]: I0121 15:51:54.491852 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 21 15:51:54 crc kubenswrapper[4760]: I0121 15:51:54.509645 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-857f44b5c5-2sxxq" event={"ID":"ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081","Type":"ContainerStarted","Data":"da786444c2f4210ffc3340a00348fb41b5a3fa63effdf60c24b9b8746307313b"} Jan 21 15:51:54 crc kubenswrapper[4760]: I0121 15:51:54.511142 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7986bbbbd5-zdfgt" event={"ID":"0652d45a-9dfb-4ada-bb28-39630775762e","Type":"ContainerStarted","Data":"5c4b190a70a0fa9f5b16d16b7cbebfd054308c213cb3cff6571684e64e127555"} Jan 21 15:51:54 crc kubenswrapper[4760]: I0121 15:51:54.561207 4760 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 21 15:51:54 crc kubenswrapper[4760]: I0121 15:51:54.746941 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 21 15:51:54 crc kubenswrapper[4760]: I0121 15:51:54.909079 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 21 15:51:54 crc kubenswrapper[4760]: I0121 15:51:54.996475 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.008549 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.011896 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.050143 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.110561 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p6nql"] Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.110844 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-p6nql" podUID="ba544d41-3795-476a-ba4e-b9f4dcf8bb5f" containerName="registry-server" containerID="cri-o://315fb15aa88b36985791a4c694e27c7695ebf21512be344f857b5e9950bfcef3" gracePeriod=30 Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.126635 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-bcf5p"] Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.127700 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bcf5p" podUID="ddcb6012-213a-4989-8cb3-60fc763a8255" containerName="registry-server" containerID="cri-o://784ca42f86e042d6d1eeb3fc149e29341c05e0b1929b4851242097fb3a7b5a27" gracePeriod=30 Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.139810 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fz22j"] Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.140040 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-fz22j" podUID="a34869a5-5ade-43ba-874a-487b308a13ca" containerName="marketplace-operator" containerID="cri-o://a84f68e02072935de857eb0ccce03cb61e9a06f782b0eb4db705cf4ab896ea16" gracePeriod=30 Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.152852 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6xd6s"] Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.153139 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6xd6s" podUID="0416bf01-ef39-4a1b-b8ca-8e02ea2882ac" containerName="registry-server" containerID="cri-o://bf7ccf1949d6df2f35ff187b38571da037f648e8e47ed74151685cbd4b8ac711" gracePeriod=30 Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.158824 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-26f8s"] Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.159090 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-26f8s" podUID="f08c19d6-0704-4562-8e0b-aa1d20161f70" containerName="registry-server" 
containerID="cri-o://836e03a50693811df5a6285b5bdc76f53803b8117a12a1b09e2314098dbaa73e" gracePeriod=30 Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.202935 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 21 15:51:55 crc kubenswrapper[4760]: E0121 15:51:55.251153 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 836e03a50693811df5a6285b5bdc76f53803b8117a12a1b09e2314098dbaa73e is running failed: container process not found" containerID="836e03a50693811df5a6285b5bdc76f53803b8117a12a1b09e2314098dbaa73e" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 15:51:55 crc kubenswrapper[4760]: E0121 15:51:55.251549 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 836e03a50693811df5a6285b5bdc76f53803b8117a12a1b09e2314098dbaa73e is running failed: container process not found" containerID="836e03a50693811df5a6285b5bdc76f53803b8117a12a1b09e2314098dbaa73e" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 15:51:55 crc kubenswrapper[4760]: E0121 15:51:55.251928 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 836e03a50693811df5a6285b5bdc76f53803b8117a12a1b09e2314098dbaa73e is running failed: container process not found" containerID="836e03a50693811df5a6285b5bdc76f53803b8117a12a1b09e2314098dbaa73e" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 15:51:55 crc kubenswrapper[4760]: E0121 15:51:55.251967 4760 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 836e03a50693811df5a6285b5bdc76f53803b8117a12a1b09e2314098dbaa73e is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-26f8s" 
podUID="f08c19d6-0704-4562-8e0b-aa1d20161f70" containerName="registry-server" Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.253146 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.302373 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.499309 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.499776 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.535073 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.562118 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.567356 4760 generic.go:334] "Generic (PLEG): container finished" podID="ba544d41-3795-476a-ba4e-b9f4dcf8bb5f" containerID="315fb15aa88b36985791a4c694e27c7695ebf21512be344f857b5e9950bfcef3" exitCode=0 Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.567500 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p6nql" event={"ID":"ba544d41-3795-476a-ba4e-b9f4dcf8bb5f","Type":"ContainerDied","Data":"315fb15aa88b36985791a4c694e27c7695ebf21512be344f857b5e9950bfcef3"} Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.619617 4760 generic.go:334] "Generic (PLEG): container finished" podID="0416bf01-ef39-4a1b-b8ca-8e02ea2882ac" containerID="bf7ccf1949d6df2f35ff187b38571da037f648e8e47ed74151685cbd4b8ac711" exitCode=0 Jan 21 15:51:55 
crc kubenswrapper[4760]: I0121 15:51:55.619691 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6xd6s" event={"ID":"0416bf01-ef39-4a1b-b8ca-8e02ea2882ac","Type":"ContainerDied","Data":"bf7ccf1949d6df2f35ff187b38571da037f648e8e47ed74151685cbd4b8ac711"} Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.621499 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7986bbbbd5-zdfgt" event={"ID":"0652d45a-9dfb-4ada-bb28-39630775762e","Type":"ContainerStarted","Data":"d923effa292c21066d34bb621bd5cbaef8a075de019a05ca0cb1629e9376ee77"} Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.621734 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7986bbbbd5-zdfgt" Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.627758 4760 generic.go:334] "Generic (PLEG): container finished" podID="a34869a5-5ade-43ba-874a-487b308a13ca" containerID="a84f68e02072935de857eb0ccce03cb61e9a06f782b0eb4db705cf4ab896ea16" exitCode=0 Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.643872 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fz22j" event={"ID":"a34869a5-5ade-43ba-874a-487b308a13ca","Type":"ContainerDied","Data":"a84f68e02072935de857eb0ccce03cb61e9a06f782b0eb4db705cf4ab896ea16"} Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.647122 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7986bbbbd5-zdfgt" Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.647598 4760 generic.go:334] "Generic (PLEG): container finished" podID="f08c19d6-0704-4562-8e0b-aa1d20161f70" containerID="836e03a50693811df5a6285b5bdc76f53803b8117a12a1b09e2314098dbaa73e" exitCode=0 Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.647669 4760 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-operators-26f8s" event={"ID":"f08c19d6-0704-4562-8e0b-aa1d20161f70","Type":"ContainerDied","Data":"836e03a50693811df5a6285b5bdc76f53803b8117a12a1b09e2314098dbaa73e"} Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.654144 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-857f44b5c5-2sxxq" event={"ID":"ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081","Type":"ContainerStarted","Data":"007c81dc3abd666272c61f1770db1ff8f917f705d344a0c1ebdb31214a6b50a6"} Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.655090 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-857f44b5c5-2sxxq" Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.664206 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-857f44b5c5-2sxxq" Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.665254 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7986bbbbd5-zdfgt" podStartSLOduration=50.665236314 podStartE2EDuration="50.665236314s" podCreationTimestamp="2026-01-21 15:51:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:51:55.663271493 +0000 UTC m=+286.331041071" watchObservedRunningTime="2026-01-21 15:51:55.665236314 +0000 UTC m=+286.333005882" Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.678800 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.680062 4760 generic.go:334] "Generic (PLEG): container finished" podID="ddcb6012-213a-4989-8cb3-60fc763a8255" 
containerID="784ca42f86e042d6d1eeb3fc149e29341c05e0b1929b4851242097fb3a7b5a27" exitCode=0 Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.680168 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bcf5p" event={"ID":"ddcb6012-213a-4989-8cb3-60fc763a8255","Type":"ContainerDied","Data":"784ca42f86e042d6d1eeb3fc149e29341c05e0b1929b4851242097fb3a7b5a27"} Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.725143 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-857f44b5c5-2sxxq" podStartSLOduration=50.72511693 podStartE2EDuration="50.72511693s" podCreationTimestamp="2026-01-21 15:51:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:51:55.711137335 +0000 UTC m=+286.378906913" watchObservedRunningTime="2026-01-21 15:51:55.72511693 +0000 UTC m=+286.392886508" Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.770067 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p6nql" Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.866811 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6xd6s" Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.902857 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba544d41-3795-476a-ba4e-b9f4dcf8bb5f-catalog-content\") pod \"ba544d41-3795-476a-ba4e-b9f4dcf8bb5f\" (UID: \"ba544d41-3795-476a-ba4e-b9f4dcf8bb5f\") " Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.902997 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbvgr\" (UniqueName: \"kubernetes.io/projected/ba544d41-3795-476a-ba4e-b9f4dcf8bb5f-kube-api-access-hbvgr\") pod \"ba544d41-3795-476a-ba4e-b9f4dcf8bb5f\" (UID: \"ba544d41-3795-476a-ba4e-b9f4dcf8bb5f\") " Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.903057 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba544d41-3795-476a-ba4e-b9f4dcf8bb5f-utilities\") pod \"ba544d41-3795-476a-ba4e-b9f4dcf8bb5f\" (UID: \"ba544d41-3795-476a-ba4e-b9f4dcf8bb5f\") " Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.904194 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba544d41-3795-476a-ba4e-b9f4dcf8bb5f-utilities" (OuterVolumeSpecName: "utilities") pod "ba544d41-3795-476a-ba4e-b9f4dcf8bb5f" (UID: "ba544d41-3795-476a-ba4e-b9f4dcf8bb5f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.910458 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba544d41-3795-476a-ba4e-b9f4dcf8bb5f-kube-api-access-hbvgr" (OuterVolumeSpecName: "kube-api-access-hbvgr") pod "ba544d41-3795-476a-ba4e-b9f4dcf8bb5f" (UID: "ba544d41-3795-476a-ba4e-b9f4dcf8bb5f"). InnerVolumeSpecName "kube-api-access-hbvgr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.959825 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.963379 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba544d41-3795-476a-ba4e-b9f4dcf8bb5f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ba544d41-3795-476a-ba4e-b9f4dcf8bb5f" (UID: "ba544d41-3795-476a-ba4e-b9f4dcf8bb5f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:51:55 crc kubenswrapper[4760]: I0121 15:51:55.980701 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-26f8s" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.004180 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5s5tz\" (UniqueName: \"kubernetes.io/projected/0416bf01-ef39-4a1b-b8ca-8e02ea2882ac-kube-api-access-5s5tz\") pod \"0416bf01-ef39-4a1b-b8ca-8e02ea2882ac\" (UID: \"0416bf01-ef39-4a1b-b8ca-8e02ea2882ac\") " Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.004803 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0416bf01-ef39-4a1b-b8ca-8e02ea2882ac-catalog-content\") pod \"0416bf01-ef39-4a1b-b8ca-8e02ea2882ac\" (UID: \"0416bf01-ef39-4a1b-b8ca-8e02ea2882ac\") " Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.004888 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f08c19d6-0704-4562-8e0b-aa1d20161f70-catalog-content\") pod \"f08c19d6-0704-4562-8e0b-aa1d20161f70\" (UID: \"f08c19d6-0704-4562-8e0b-aa1d20161f70\") " Jan 21 15:51:56 crc 
kubenswrapper[4760]: I0121 15:51:56.007672 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0416bf01-ef39-4a1b-b8ca-8e02ea2882ac-kube-api-access-5s5tz" (OuterVolumeSpecName: "kube-api-access-5s5tz") pod "0416bf01-ef39-4a1b-b8ca-8e02ea2882ac" (UID: "0416bf01-ef39-4a1b-b8ca-8e02ea2882ac"). InnerVolumeSpecName "kube-api-access-5s5tz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.015401 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0416bf01-ef39-4a1b-b8ca-8e02ea2882ac-utilities\") pod \"0416bf01-ef39-4a1b-b8ca-8e02ea2882ac\" (UID: \"0416bf01-ef39-4a1b-b8ca-8e02ea2882ac\") " Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.015530 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f08c19d6-0704-4562-8e0b-aa1d20161f70-utilities\") pod \"f08c19d6-0704-4562-8e0b-aa1d20161f70\" (UID: \"f08c19d6-0704-4562-8e0b-aa1d20161f70\") " Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.015552 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgkpx\" (UniqueName: \"kubernetes.io/projected/f08c19d6-0704-4562-8e0b-aa1d20161f70-kube-api-access-mgkpx\") pod \"f08c19d6-0704-4562-8e0b-aa1d20161f70\" (UID: \"f08c19d6-0704-4562-8e0b-aa1d20161f70\") " Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.017749 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f08c19d6-0704-4562-8e0b-aa1d20161f70-utilities" (OuterVolumeSpecName: "utilities") pod "f08c19d6-0704-4562-8e0b-aa1d20161f70" (UID: "f08c19d6-0704-4562-8e0b-aa1d20161f70"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.017827 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbvgr\" (UniqueName: \"kubernetes.io/projected/ba544d41-3795-476a-ba4e-b9f4dcf8bb5f-kube-api-access-hbvgr\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.017870 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5s5tz\" (UniqueName: \"kubernetes.io/projected/0416bf01-ef39-4a1b-b8ca-8e02ea2882ac-kube-api-access-5s5tz\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.017889 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba544d41-3795-476a-ba4e-b9f4dcf8bb5f-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.017900 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba544d41-3795-476a-ba4e-b9f4dcf8bb5f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.018814 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0416bf01-ef39-4a1b-b8ca-8e02ea2882ac-utilities" (OuterVolumeSpecName: "utilities") pod "0416bf01-ef39-4a1b-b8ca-8e02ea2882ac" (UID: "0416bf01-ef39-4a1b-b8ca-8e02ea2882ac"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.020909 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f08c19d6-0704-4562-8e0b-aa1d20161f70-kube-api-access-mgkpx" (OuterVolumeSpecName: "kube-api-access-mgkpx") pod "f08c19d6-0704-4562-8e0b-aa1d20161f70" (UID: "f08c19d6-0704-4562-8e0b-aa1d20161f70"). InnerVolumeSpecName "kube-api-access-mgkpx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.045219 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bcf5p" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.054650 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0416bf01-ef39-4a1b-b8ca-8e02ea2882ac-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0416bf01-ef39-4a1b-b8ca-8e02ea2882ac" (UID: "0416bf01-ef39-4a1b-b8ca-8e02ea2882ac"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.080933 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fz22j" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.104600 4760 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.118975 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddcb6012-213a-4989-8cb3-60fc763a8255-catalog-content\") pod \"ddcb6012-213a-4989-8cb3-60fc763a8255\" (UID: \"ddcb6012-213a-4989-8cb3-60fc763a8255\") " Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.119035 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddcb6012-213a-4989-8cb3-60fc763a8255-utilities\") pod \"ddcb6012-213a-4989-8cb3-60fc763a8255\" (UID: \"ddcb6012-213a-4989-8cb3-60fc763a8255\") " Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.119086 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/a34869a5-5ade-43ba-874a-487b308a13ca-marketplace-operator-metrics\") pod \"a34869a5-5ade-43ba-874a-487b308a13ca\" (UID: \"a34869a5-5ade-43ba-874a-487b308a13ca\") " Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.119152 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgzbf\" (UniqueName: \"kubernetes.io/projected/a34869a5-5ade-43ba-874a-487b308a13ca-kube-api-access-jgzbf\") pod \"a34869a5-5ade-43ba-874a-487b308a13ca\" (UID: \"a34869a5-5ade-43ba-874a-487b308a13ca\") " Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.119210 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ff6j9\" (UniqueName: \"kubernetes.io/projected/ddcb6012-213a-4989-8cb3-60fc763a8255-kube-api-access-ff6j9\") pod \"ddcb6012-213a-4989-8cb3-60fc763a8255\" (UID: \"ddcb6012-213a-4989-8cb3-60fc763a8255\") " Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.119347 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a34869a5-5ade-43ba-874a-487b308a13ca-marketplace-trusted-ca\") pod \"a34869a5-5ade-43ba-874a-487b308a13ca\" (UID: \"a34869a5-5ade-43ba-874a-487b308a13ca\") " Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.119873 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0416bf01-ef39-4a1b-b8ca-8e02ea2882ac-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.119899 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f08c19d6-0704-4562-8e0b-aa1d20161f70-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.119914 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mgkpx\" (UniqueName: 
\"kubernetes.io/projected/f08c19d6-0704-4562-8e0b-aa1d20161f70-kube-api-access-mgkpx\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.119929 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0416bf01-ef39-4a1b-b8ca-8e02ea2882ac-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.121079 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ddcb6012-213a-4989-8cb3-60fc763a8255-utilities" (OuterVolumeSpecName: "utilities") pod "ddcb6012-213a-4989-8cb3-60fc763a8255" (UID: "ddcb6012-213a-4989-8cb3-60fc763a8255"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.121494 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a34869a5-5ade-43ba-874a-487b308a13ca-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "a34869a5-5ade-43ba-874a-487b308a13ca" (UID: "a34869a5-5ade-43ba-874a-487b308a13ca"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.124189 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a34869a5-5ade-43ba-874a-487b308a13ca-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "a34869a5-5ade-43ba-874a-487b308a13ca" (UID: "a34869a5-5ade-43ba-874a-487b308a13ca"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.124526 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a34869a5-5ade-43ba-874a-487b308a13ca-kube-api-access-jgzbf" (OuterVolumeSpecName: "kube-api-access-jgzbf") pod "a34869a5-5ade-43ba-874a-487b308a13ca" (UID: "a34869a5-5ade-43ba-874a-487b308a13ca"). InnerVolumeSpecName "kube-api-access-jgzbf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.128406 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddcb6012-213a-4989-8cb3-60fc763a8255-kube-api-access-ff6j9" (OuterVolumeSpecName: "kube-api-access-ff6j9") pod "ddcb6012-213a-4989-8cb3-60fc763a8255" (UID: "ddcb6012-213a-4989-8cb3-60fc763a8255"). InnerVolumeSpecName "kube-api-access-ff6j9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.155862 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f08c19d6-0704-4562-8e0b-aa1d20161f70-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f08c19d6-0704-4562-8e0b-aa1d20161f70" (UID: "f08c19d6-0704-4562-8e0b-aa1d20161f70"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.194462 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ddcb6012-213a-4989-8cb3-60fc763a8255-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ddcb6012-213a-4989-8cb3-60fc763a8255" (UID: "ddcb6012-213a-4989-8cb3-60fc763a8255"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.221519 4760 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a34869a5-5ade-43ba-874a-487b308a13ca-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.221564 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddcb6012-213a-4989-8cb3-60fc763a8255-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.221574 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddcb6012-213a-4989-8cb3-60fc763a8255-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.221584 4760 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a34869a5-5ade-43ba-874a-487b308a13ca-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.221594 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgzbf\" (UniqueName: \"kubernetes.io/projected/a34869a5-5ade-43ba-874a-487b308a13ca-kube-api-access-jgzbf\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.221604 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ff6j9\" (UniqueName: \"kubernetes.io/projected/ddcb6012-213a-4989-8cb3-60fc763a8255-kube-api-access-ff6j9\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.221613 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f08c19d6-0704-4562-8e0b-aa1d20161f70-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 
21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.534374 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.601233 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.687279 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6xd6s" event={"ID":"0416bf01-ef39-4a1b-b8ca-8e02ea2882ac","Type":"ContainerDied","Data":"ede1a4e7d84b3a77b2d09727692f417c217a123ba4dc60426746e1a06e51fa3b"} Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.687407 4760 scope.go:117] "RemoveContainer" containerID="bf7ccf1949d6df2f35ff187b38571da037f648e8e47ed74151685cbd4b8ac711" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.687416 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6xd6s" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.689030 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fz22j" event={"ID":"a34869a5-5ade-43ba-874a-487b308a13ca","Type":"ContainerDied","Data":"7c2120986455451a5fa0d8c01d9631a9c25536c6fd2ce07d9866b539400484eb"} Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.689053 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fz22j" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.697952 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-26f8s" event={"ID":"f08c19d6-0704-4562-8e0b-aa1d20161f70","Type":"ContainerDied","Data":"541a7f871278d05ad698fda2df7aa406ca08b0a08158989a26312b95b2c447f8"} Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.698082 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-26f8s" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.706299 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bcf5p" event={"ID":"ddcb6012-213a-4989-8cb3-60fc763a8255","Type":"ContainerDied","Data":"a7babcd6222774dab124948469e3fbae711626933b44ca524c6ab5d5470092df"} Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.706798 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bcf5p" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.715475 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p6nql" event={"ID":"ba544d41-3795-476a-ba4e-b9f4dcf8bb5f","Type":"ContainerDied","Data":"04845806ce311f8c329c8bcbddee515e27f30b40e982c655baf6a2792e30a7a8"} Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.715534 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p6nql" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.740172 4760 scope.go:117] "RemoveContainer" containerID="a10aa316b83a3f9c2d25d428da60f4f8dc9a314c4b9b7112ace20ddbfd8e0575" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.745450 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6xd6s"] Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.750749 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6xd6s"] Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.763909 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bcf5p"] Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.771973 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bcf5p"] Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.776799 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p6nql"] Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.782797 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-p6nql"] Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.785242 4760 scope.go:117] "RemoveContainer" containerID="e371a3a74457ac6d6003019cd4ef6160788cd3352bedd25a6047dad48c01aa1a" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.792447 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-26f8s"] Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.798502 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-26f8s"] Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.813015 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fz22j"] 
Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.817915 4760 scope.go:117] "RemoveContainer" containerID="a84f68e02072935de857eb0ccce03cb61e9a06f782b0eb4db705cf4ab896ea16" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.818244 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fz22j"] Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.830139 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.833051 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.836779 4760 scope.go:117] "RemoveContainer" containerID="836e03a50693811df5a6285b5bdc76f53803b8117a12a1b09e2314098dbaa73e" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.850391 4760 scope.go:117] "RemoveContainer" containerID="34c0e91fa589c98af563e350825bf2916ca8107e682048309cf4cfb27dbe7ca9" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.874443 4760 scope.go:117] "RemoveContainer" containerID="a3bbfc5e6a85022bc527915cba1d4de9bfd61b5258e677882ba965ba0f9aec02" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.895664 4760 scope.go:117] "RemoveContainer" containerID="784ca42f86e042d6d1eeb3fc149e29341c05e0b1929b4851242097fb3a7b5a27" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.917059 4760 scope.go:117] "RemoveContainer" containerID="9168171ae827742ea642122d54e48757d6fe3f2ce307edbe293faba1ad8c6a19" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.938798 4760 scope.go:117] "RemoveContainer" containerID="77cf9d1328c6c7c43e38bfbe89cf385c04c30e3e3af785877e99bb17caac4c54" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.954024 4760 scope.go:117] "RemoveContainer" 
containerID="315fb15aa88b36985791a4c694e27c7695ebf21512be344f857b5e9950bfcef3" Jan 21 15:51:56 crc kubenswrapper[4760]: I0121 15:51:56.970737 4760 scope.go:117] "RemoveContainer" containerID="633996cf50a456325703c67cb22ee42dd93c0f4af97d123ece106067febb7014" Jan 21 15:51:57 crc kubenswrapper[4760]: I0121 15:51:57.000653 4760 scope.go:117] "RemoveContainer" containerID="afb748d1e3303e9c354838b8efea3a1db5673f0417c1ed47429a02ba7c78d173" Jan 21 15:51:57 crc kubenswrapper[4760]: I0121 15:51:57.632745 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0416bf01-ef39-4a1b-b8ca-8e02ea2882ac" path="/var/lib/kubelet/pods/0416bf01-ef39-4a1b-b8ca-8e02ea2882ac/volumes" Jan 21 15:51:57 crc kubenswrapper[4760]: I0121 15:51:57.634167 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a34869a5-5ade-43ba-874a-487b308a13ca" path="/var/lib/kubelet/pods/a34869a5-5ade-43ba-874a-487b308a13ca/volumes" Jan 21 15:51:57 crc kubenswrapper[4760]: I0121 15:51:57.634758 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba544d41-3795-476a-ba4e-b9f4dcf8bb5f" path="/var/lib/kubelet/pods/ba544d41-3795-476a-ba4e-b9f4dcf8bb5f/volumes" Jan 21 15:51:57 crc kubenswrapper[4760]: I0121 15:51:57.636159 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddcb6012-213a-4989-8cb3-60fc763a8255" path="/var/lib/kubelet/pods/ddcb6012-213a-4989-8cb3-60fc763a8255/volumes" Jan 21 15:51:57 crc kubenswrapper[4760]: I0121 15:51:57.636890 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f08c19d6-0704-4562-8e0b-aa1d20161f70" path="/var/lib/kubelet/pods/f08c19d6-0704-4562-8e0b-aa1d20161f70/volumes" Jan 21 15:51:57 crc kubenswrapper[4760]: I0121 15:51:57.647818 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 21 15:51:59 crc kubenswrapper[4760]: I0121 15:51:59.503188 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 21 15:51:59 crc kubenswrapper[4760]: I0121 15:51:59.503627 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 15:51:59 crc kubenswrapper[4760]: I0121 15:51:59.567609 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 15:51:59 crc kubenswrapper[4760]: I0121 15:51:59.567671 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 15:51:59 crc kubenswrapper[4760]: I0121 15:51:59.567736 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 15:51:59 crc kubenswrapper[4760]: I0121 15:51:59.567766 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 15:51:59 crc kubenswrapper[4760]: I0121 15:51:59.567791 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 15:51:59 crc kubenswrapper[4760]: I0121 15:51:59.567783 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:51:59 crc kubenswrapper[4760]: I0121 15:51:59.567852 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:51:59 crc kubenswrapper[4760]: I0121 15:51:59.567863 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:51:59 crc kubenswrapper[4760]: I0121 15:51:59.567923 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 15:51:59 crc kubenswrapper[4760]: I0121 15:51:59.568280 4760 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:59 crc kubenswrapper[4760]: I0121 15:51:59.568307 4760 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:59 crc kubenswrapper[4760]: I0121 15:51:59.568338 4760 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:59 crc kubenswrapper[4760]: I0121 15:51:59.568355 4760 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 21 15:51:59 crc kubenswrapper[4760]: I0121 15:51:59.576986 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 15:51:59 crc kubenswrapper[4760]: I0121 15:51:59.642569 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Jan 21 15:51:59 crc kubenswrapper[4760]: I0121 15:51:59.669990 4760 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 21 15:51:59 crc kubenswrapper[4760]: I0121 15:51:59.745559 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 21 15:51:59 crc kubenswrapper[4760]: I0121 15:51:59.745624 4760 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="307d949b95010e7cacd75448aa4f8cbcd11eb755920f7c1769010d4f189cb6cb" exitCode=137
Jan 21 15:51:59 crc kubenswrapper[4760]: I0121 15:51:59.745679 4760 scope.go:117] "RemoveContainer" containerID="307d949b95010e7cacd75448aa4f8cbcd11eb755920f7c1769010d4f189cb6cb"
Jan 21 15:51:59 crc kubenswrapper[4760]: I0121 15:51:59.745728 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 21 15:51:59 crc kubenswrapper[4760]: I0121 15:51:59.766135 4760 scope.go:117] "RemoveContainer" containerID="307d949b95010e7cacd75448aa4f8cbcd11eb755920f7c1769010d4f189cb6cb"
Jan 21 15:51:59 crc kubenswrapper[4760]: E0121 15:51:59.766636 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"307d949b95010e7cacd75448aa4f8cbcd11eb755920f7c1769010d4f189cb6cb\": container with ID starting with 307d949b95010e7cacd75448aa4f8cbcd11eb755920f7c1769010d4f189cb6cb not found: ID does not exist" containerID="307d949b95010e7cacd75448aa4f8cbcd11eb755920f7c1769010d4f189cb6cb"
Jan 21 15:51:59 crc kubenswrapper[4760]: I0121 15:51:59.766685 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"307d949b95010e7cacd75448aa4f8cbcd11eb755920f7c1769010d4f189cb6cb"} err="failed to get container status \"307d949b95010e7cacd75448aa4f8cbcd11eb755920f7c1769010d4f189cb6cb\": rpc error: code = NotFound desc = could not find container \"307d949b95010e7cacd75448aa4f8cbcd11eb755920f7c1769010d4f189cb6cb\": container with ID starting with 307d949b95010e7cacd75448aa4f8cbcd11eb755920f7c1769010d4f189cb6cb not found: ID does not exist"
Jan 21 15:52:05 crc kubenswrapper[4760]: I0121 15:52:05.448994 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7986bbbbd5-zdfgt"]
Jan 21 15:52:05 crc kubenswrapper[4760]: I0121 15:52:05.449938 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7986bbbbd5-zdfgt" podUID="0652d45a-9dfb-4ada-bb28-39630775762e" containerName="controller-manager" containerID="cri-o://d923effa292c21066d34bb621bd5cbaef8a075de019a05ca0cb1629e9376ee77" gracePeriod=30
Jan 21 15:52:05 crc kubenswrapper[4760]: I0121 15:52:05.553438 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-857f44b5c5-2sxxq"]
Jan 21 15:52:05 crc kubenswrapper[4760]: I0121 15:52:05.553836 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-857f44b5c5-2sxxq" podUID="ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081" containerName="route-controller-manager" containerID="cri-o://007c81dc3abd666272c61f1770db1ff8f917f705d344a0c1ebdb31214a6b50a6" gracePeriod=30
Jan 21 15:52:06 crc kubenswrapper[4760]: I0121 15:52:06.631998 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-857f44b5c5-2sxxq"
Jan 21 15:52:06 crc kubenswrapper[4760]: I0121 15:52:06.650254 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081-serving-cert\") pod \"ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081\" (UID: \"ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081\") "
Jan 21 15:52:06 crc kubenswrapper[4760]: I0121 15:52:06.650843 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081-client-ca\") pod \"ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081\" (UID: \"ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081\") "
Jan 21 15:52:06 crc kubenswrapper[4760]: I0121 15:52:06.650891 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081-config\") pod \"ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081\" (UID: \"ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081\") "
Jan 21 15:52:06 crc kubenswrapper[4760]: I0121 15:52:06.650992 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2k5c8\" (UniqueName: \"kubernetes.io/projected/ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081-kube-api-access-2k5c8\") pod \"ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081\" (UID: \"ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081\") "
Jan 21 15:52:06 crc kubenswrapper[4760]: I0121 15:52:06.651524 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081-client-ca" (OuterVolumeSpecName: "client-ca") pod "ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081" (UID: "ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:52:06 crc kubenswrapper[4760]: I0121 15:52:06.651914 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081-config" (OuterVolumeSpecName: "config") pod "ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081" (UID: "ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:52:06 crc kubenswrapper[4760]: I0121 15:52:06.660971 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081-kube-api-access-2k5c8" (OuterVolumeSpecName: "kube-api-access-2k5c8") pod "ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081" (UID: "ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081"). InnerVolumeSpecName "kube-api-access-2k5c8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:52:06 crc kubenswrapper[4760]: I0121 15:52:06.662216 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081" (UID: "ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:52:06 crc kubenswrapper[4760]: I0121 15:52:06.752575 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2k5c8\" (UniqueName: \"kubernetes.io/projected/ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081-kube-api-access-2k5c8\") on node \"crc\" DevicePath \"\""
Jan 21 15:52:06 crc kubenswrapper[4760]: I0121 15:52:06.752642 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 21 15:52:06 crc kubenswrapper[4760]: I0121 15:52:06.752652 4760 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081-client-ca\") on node \"crc\" DevicePath \"\""
Jan 21 15:52:06 crc kubenswrapper[4760]: I0121 15:52:06.752663 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081-config\") on node \"crc\" DevicePath \"\""
Jan 21 15:52:06 crc kubenswrapper[4760]: I0121 15:52:06.788627 4760 generic.go:334] "Generic (PLEG): container finished" podID="0652d45a-9dfb-4ada-bb28-39630775762e" containerID="d923effa292c21066d34bb621bd5cbaef8a075de019a05ca0cb1629e9376ee77" exitCode=0
Jan 21 15:52:06 crc kubenswrapper[4760]: I0121 15:52:06.788731 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7986bbbbd5-zdfgt" event={"ID":"0652d45a-9dfb-4ada-bb28-39630775762e","Type":"ContainerDied","Data":"d923effa292c21066d34bb621bd5cbaef8a075de019a05ca0cb1629e9376ee77"}
Jan 21 15:52:06 crc kubenswrapper[4760]: I0121 15:52:06.790754 4760 generic.go:334] "Generic (PLEG): container finished" podID="ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081" containerID="007c81dc3abd666272c61f1770db1ff8f917f705d344a0c1ebdb31214a6b50a6" exitCode=0
Jan 21 15:52:06 crc kubenswrapper[4760]: I0121 15:52:06.790820 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-857f44b5c5-2sxxq"
Jan 21 15:52:06 crc kubenswrapper[4760]: I0121 15:52:06.790818 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-857f44b5c5-2sxxq" event={"ID":"ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081","Type":"ContainerDied","Data":"007c81dc3abd666272c61f1770db1ff8f917f705d344a0c1ebdb31214a6b50a6"}
Jan 21 15:52:06 crc kubenswrapper[4760]: I0121 15:52:06.790899 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-857f44b5c5-2sxxq" event={"ID":"ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081","Type":"ContainerDied","Data":"da786444c2f4210ffc3340a00348fb41b5a3fa63effdf60c24b9b8746307313b"}
Jan 21 15:52:06 crc kubenswrapper[4760]: I0121 15:52:06.790924 4760 scope.go:117] "RemoveContainer" containerID="007c81dc3abd666272c61f1770db1ff8f917f705d344a0c1ebdb31214a6b50a6"
Jan 21 15:52:06 crc kubenswrapper[4760]: I0121 15:52:06.820417 4760 scope.go:117] "RemoveContainer" containerID="007c81dc3abd666272c61f1770db1ff8f917f705d344a0c1ebdb31214a6b50a6"
Jan 21 15:52:06 crc kubenswrapper[4760]: E0121 15:52:06.820933 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"007c81dc3abd666272c61f1770db1ff8f917f705d344a0c1ebdb31214a6b50a6\": container with ID starting with 007c81dc3abd666272c61f1770db1ff8f917f705d344a0c1ebdb31214a6b50a6 not found: ID does not exist" containerID="007c81dc3abd666272c61f1770db1ff8f917f705d344a0c1ebdb31214a6b50a6"
Jan 21 15:52:06 crc kubenswrapper[4760]: I0121 15:52:06.820972 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"007c81dc3abd666272c61f1770db1ff8f917f705d344a0c1ebdb31214a6b50a6"} err="failed to get container status \"007c81dc3abd666272c61f1770db1ff8f917f705d344a0c1ebdb31214a6b50a6\": rpc error: code = NotFound desc = could not find container \"007c81dc3abd666272c61f1770db1ff8f917f705d344a0c1ebdb31214a6b50a6\": container with ID starting with 007c81dc3abd666272c61f1770db1ff8f917f705d344a0c1ebdb31214a6b50a6 not found: ID does not exist"
Jan 21 15:52:06 crc kubenswrapper[4760]: I0121 15:52:06.821012 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-857f44b5c5-2sxxq"]
Jan 21 15:52:06 crc kubenswrapper[4760]: I0121 15:52:06.823609 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-857f44b5c5-2sxxq"]
Jan 21 15:52:07 crc kubenswrapper[4760]: I0121 15:52:07.095700 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7986bbbbd5-zdfgt"
Jan 21 15:52:07 crc kubenswrapper[4760]: I0121 15:52:07.156693 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0652d45a-9dfb-4ada-bb28-39630775762e-proxy-ca-bundles\") pod \"0652d45a-9dfb-4ada-bb28-39630775762e\" (UID: \"0652d45a-9dfb-4ada-bb28-39630775762e\") "
Jan 21 15:52:07 crc kubenswrapper[4760]: I0121 15:52:07.156817 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0652d45a-9dfb-4ada-bb28-39630775762e-config\") pod \"0652d45a-9dfb-4ada-bb28-39630775762e\" (UID: \"0652d45a-9dfb-4ada-bb28-39630775762e\") "
Jan 21 15:52:07 crc kubenswrapper[4760]: I0121 15:52:07.156942 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0652d45a-9dfb-4ada-bb28-39630775762e-client-ca\") pod \"0652d45a-9dfb-4ada-bb28-39630775762e\" (UID: \"0652d45a-9dfb-4ada-bb28-39630775762e\") "
Jan 21 15:52:07 crc kubenswrapper[4760]: I0121 15:52:07.156984 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0652d45a-9dfb-4ada-bb28-39630775762e-serving-cert\") pod \"0652d45a-9dfb-4ada-bb28-39630775762e\" (UID: \"0652d45a-9dfb-4ada-bb28-39630775762e\") "
Jan 21 15:52:07 crc kubenswrapper[4760]: I0121 15:52:07.157054 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf4qb\" (UniqueName: \"kubernetes.io/projected/0652d45a-9dfb-4ada-bb28-39630775762e-kube-api-access-bf4qb\") pod \"0652d45a-9dfb-4ada-bb28-39630775762e\" (UID: \"0652d45a-9dfb-4ada-bb28-39630775762e\") "
Jan 21 15:52:07 crc kubenswrapper[4760]: I0121 15:52:07.157851 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0652d45a-9dfb-4ada-bb28-39630775762e-client-ca" (OuterVolumeSpecName: "client-ca") pod "0652d45a-9dfb-4ada-bb28-39630775762e" (UID: "0652d45a-9dfb-4ada-bb28-39630775762e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:52:07 crc kubenswrapper[4760]: I0121 15:52:07.157860 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0652d45a-9dfb-4ada-bb28-39630775762e-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "0652d45a-9dfb-4ada-bb28-39630775762e" (UID: "0652d45a-9dfb-4ada-bb28-39630775762e"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:52:07 crc kubenswrapper[4760]: I0121 15:52:07.158066 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0652d45a-9dfb-4ada-bb28-39630775762e-config" (OuterVolumeSpecName: "config") pod "0652d45a-9dfb-4ada-bb28-39630775762e" (UID: "0652d45a-9dfb-4ada-bb28-39630775762e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:52:07 crc kubenswrapper[4760]: I0121 15:52:07.160992 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0652d45a-9dfb-4ada-bb28-39630775762e-kube-api-access-bf4qb" (OuterVolumeSpecName: "kube-api-access-bf4qb") pod "0652d45a-9dfb-4ada-bb28-39630775762e" (UID: "0652d45a-9dfb-4ada-bb28-39630775762e"). InnerVolumeSpecName "kube-api-access-bf4qb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:52:07 crc kubenswrapper[4760]: I0121 15:52:07.161139 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0652d45a-9dfb-4ada-bb28-39630775762e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0652d45a-9dfb-4ada-bb28-39630775762e" (UID: "0652d45a-9dfb-4ada-bb28-39630775762e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:52:07 crc kubenswrapper[4760]: I0121 15:52:07.258342 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf4qb\" (UniqueName: \"kubernetes.io/projected/0652d45a-9dfb-4ada-bb28-39630775762e-kube-api-access-bf4qb\") on node \"crc\" DevicePath \"\""
Jan 21 15:52:07 crc kubenswrapper[4760]: I0121 15:52:07.258385 4760 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0652d45a-9dfb-4ada-bb28-39630775762e-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 21 15:52:07 crc kubenswrapper[4760]: I0121 15:52:07.258395 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0652d45a-9dfb-4ada-bb28-39630775762e-config\") on node \"crc\" DevicePath \"\""
Jan 21 15:52:07 crc kubenswrapper[4760]: I0121 15:52:07.258406 4760 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0652d45a-9dfb-4ada-bb28-39630775762e-client-ca\") on node \"crc\" DevicePath \"\""
Jan 21 15:52:07 crc kubenswrapper[4760]: I0121 15:52:07.258416 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0652d45a-9dfb-4ada-bb28-39630775762e-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 21 15:52:07 crc kubenswrapper[4760]: I0121 15:52:07.628215 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081" path="/var/lib/kubelet/pods/ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081/volumes"
Jan 21 15:52:07 crc kubenswrapper[4760]: I0121 15:52:07.800965 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7986bbbbd5-zdfgt" event={"ID":"0652d45a-9dfb-4ada-bb28-39630775762e","Type":"ContainerDied","Data":"5c4b190a70a0fa9f5b16d16b7cbebfd054308c213cb3cff6571684e64e127555"}
Jan 21 15:52:07 crc kubenswrapper[4760]: I0121 15:52:07.801046 4760 scope.go:117] "RemoveContainer" containerID="d923effa292c21066d34bb621bd5cbaef8a075de019a05ca0cb1629e9376ee77"
Jan 21 15:52:07 crc kubenswrapper[4760]: I0121 15:52:07.801070 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7986bbbbd5-zdfgt"
Jan 21 15:52:07 crc kubenswrapper[4760]: I0121 15:52:07.824058 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7986bbbbd5-zdfgt"]
Jan 21 15:52:07 crc kubenswrapper[4760]: I0121 15:52:07.829175 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7986bbbbd5-zdfgt"]
Jan 21 15:52:08 crc kubenswrapper[4760]: I0121 15:52:08.518778 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 21 15:52:08 crc kubenswrapper[4760]: I0121 15:52:08.882341 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 21 15:52:09 crc kubenswrapper[4760]: I0121 15:52:09.505395 4760 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials
Jan 21 15:52:09 crc kubenswrapper[4760]: I0121 15:52:09.629581 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0652d45a-9dfb-4ada-bb28-39630775762e" path="/var/lib/kubelet/pods/0652d45a-9dfb-4ada-bb28-39630775762e/volumes"
Jan 21 15:52:10 crc kubenswrapper[4760]: I0121 15:52:10.272519 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Jan 21 15:52:12 crc kubenswrapper[4760]: I0121 15:52:12.901431 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Jan 21 15:52:13 crc kubenswrapper[4760]: I0121 15:52:13.086012 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 21 15:52:13 crc kubenswrapper[4760]: I0121 15:52:13.647998 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Jan 21 15:52:14 crc kubenswrapper[4760]: I0121 15:52:14.091485 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 21 15:52:16 crc kubenswrapper[4760]: I0121 15:52:16.046723 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Jan 21 15:52:16 crc kubenswrapper[4760]: I0121 15:52:16.251925 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Jan 21 15:52:16 crc kubenswrapper[4760]: I0121 15:52:16.488931 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 21 15:52:17 crc kubenswrapper[4760]: I0121 15:52:17.615679 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 21 15:52:17 crc kubenswrapper[4760]: I0121 15:52:17.703854 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 21 15:52:17 crc kubenswrapper[4760]: I0121 15:52:17.820559 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 21 15:52:18 crc kubenswrapper[4760]: I0121 15:52:18.201995 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 21 15:52:18 crc kubenswrapper[4760]: I0121 15:52:18.318267 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 21 15:52:19 crc kubenswrapper[4760]: I0121 15:52:19.222282 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Jan 21 15:52:19 crc kubenswrapper[4760]: I0121 15:52:19.956226 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 21 15:52:20 crc kubenswrapper[4760]: I0121 15:52:20.028531 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 21 15:52:20 crc kubenswrapper[4760]: I0121 15:52:20.054147 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 21 15:52:21 crc kubenswrapper[4760]: I0121 15:52:21.505944 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 21 15:52:22 crc kubenswrapper[4760]: I0121 15:52:22.761118 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 21 15:52:23 crc kubenswrapper[4760]: I0121 15:52:23.442537 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 21 15:52:23 crc kubenswrapper[4760]: I0121 15:52:23.836980 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 21 15:52:23 crc kubenswrapper[4760]: I0121 15:52:23.993849 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 21 15:52:24 crc kubenswrapper[4760]: I0121 15:52:24.589120 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 21 15:52:25 crc kubenswrapper[4760]: I0121 15:52:25.159930 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.613288 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85776c4794-sb6cx"]
Jan 21 15:52:26 crc kubenswrapper[4760]: E0121 15:52:26.613558 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba544d41-3795-476a-ba4e-b9f4dcf8bb5f" containerName="extract-content"
Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.613573 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba544d41-3795-476a-ba4e-b9f4dcf8bb5f" containerName="extract-content"
Jan 21 15:52:26 crc kubenswrapper[4760]: E0121 15:52:26.613585 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0416bf01-ef39-4a1b-b8ca-8e02ea2882ac" containerName="extract-content"
Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.613590 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="0416bf01-ef39-4a1b-b8ca-8e02ea2882ac" containerName="extract-content"
Jan 21 15:52:26 crc kubenswrapper[4760]: E0121 15:52:26.613599 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f08c19d6-0704-4562-8e0b-aa1d20161f70" containerName="extract-content"
Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.613643 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f08c19d6-0704-4562-8e0b-aa1d20161f70" containerName="extract-content"
Jan 21 15:52:26 crc kubenswrapper[4760]: E0121 15:52:26.613651 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a34869a5-5ade-43ba-874a-487b308a13ca" containerName="marketplace-operator"
Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.613657 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="a34869a5-5ade-43ba-874a-487b308a13ca" containerName="marketplace-operator"
Jan 21 15:52:26 crc kubenswrapper[4760]: E0121 15:52:26.613663 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0416bf01-ef39-4a1b-b8ca-8e02ea2882ac" containerName="registry-server"
Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.613669 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="0416bf01-ef39-4a1b-b8ca-8e02ea2882ac" containerName="registry-server"
Jan 21 15:52:26 crc kubenswrapper[4760]: E0121 15:52:26.613679 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddcb6012-213a-4989-8cb3-60fc763a8255" containerName="extract-utilities"
Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.613685 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddcb6012-213a-4989-8cb3-60fc763a8255" containerName="extract-utilities"
Jan 21 15:52:26 crc kubenswrapper[4760]: E0121 15:52:26.613692 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0416bf01-ef39-4a1b-b8ca-8e02ea2882ac" containerName="extract-utilities"
Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.613698 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="0416bf01-ef39-4a1b-b8ca-8e02ea2882ac" containerName="extract-utilities"
Jan 21 15:52:26 crc kubenswrapper[4760]: E0121 15:52:26.613707 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba544d41-3795-476a-ba4e-b9f4dcf8bb5f" containerName="extract-utilities"
Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.613713 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba544d41-3795-476a-ba4e-b9f4dcf8bb5f" containerName="extract-utilities"
Jan 21 15:52:26 crc kubenswrapper[4760]: E0121 15:52:26.613722 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f08c19d6-0704-4562-8e0b-aa1d20161f70" containerName="registry-server"
Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.613728 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f08c19d6-0704-4562-8e0b-aa1d20161f70" containerName="registry-server"
Jan 21 15:52:26 crc kubenswrapper[4760]: E0121 15:52:26.613736 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba544d41-3795-476a-ba4e-b9f4dcf8bb5f" containerName="registry-server"
Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.613743 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba544d41-3795-476a-ba4e-b9f4dcf8bb5f" containerName="registry-server"
Jan 21 15:52:26 crc kubenswrapper[4760]: E0121 15:52:26.613751 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddcb6012-213a-4989-8cb3-60fc763a8255" containerName="extract-content"
Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.613757 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddcb6012-213a-4989-8cb3-60fc763a8255" containerName="extract-content"
Jan 21 15:52:26 crc kubenswrapper[4760]: E0121 15:52:26.613766 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.613772 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 21 15:52:26 crc kubenswrapper[4760]: E0121 15:52:26.613779 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0652d45a-9dfb-4ada-bb28-39630775762e" containerName="controller-manager"
Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.613785 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="0652d45a-9dfb-4ada-bb28-39630775762e" containerName="controller-manager"
Jan 21 15:52:26 crc kubenswrapper[4760]: E0121 15:52:26.613794 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f08c19d6-0704-4562-8e0b-aa1d20161f70" containerName="extract-utilities"
Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.613799 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f08c19d6-0704-4562-8e0b-aa1d20161f70" containerName="extract-utilities"
Jan 21 15:52:26 crc kubenswrapper[4760]: E0121 15:52:26.613809 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddcb6012-213a-4989-8cb3-60fc763a8255" containerName="registry-server"
Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.613814 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddcb6012-213a-4989-8cb3-60fc763a8255" containerName="registry-server"
Jan 21 15:52:26 crc kubenswrapper[4760]: E0121 15:52:26.613823 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081" containerName="route-controller-manager"
Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.613829 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081" containerName="route-controller-manager"
Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.613912 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="0652d45a-9dfb-4ada-bb28-39630775762e" containerName="controller-manager"
Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.613921 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddcb6012-213a-4989-8cb3-60fc763a8255" containerName="registry-server"
Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.613933 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="a34869a5-5ade-43ba-874a-487b308a13ca" containerName="marketplace-operator"
Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.613941 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca2815d7-c9a2-4be7-b56b-2b2d1f5b6081" containerName="route-controller-manager"
Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.613953 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.613961 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="f08c19d6-0704-4562-8e0b-aa1d20161f70" containerName="registry-server"
Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.613970 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba544d41-3795-476a-ba4e-b9f4dcf8bb5f" containerName="registry-server"
Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.613981 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="0416bf01-ef39-4a1b-b8ca-8e02ea2882ac" containerName="registry-server"
Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.614362 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85776c4794-sb6cx"
Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.616680 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lhqrl"]
Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.617245 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-lhqrl"
Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.617343 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.617817 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.618043 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.619050 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.619058 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.619222 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.619295 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.619541 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.620290 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.620790 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.627844 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85776c4794-sb6cx"]
Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.630637 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.633275 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lhqrl"]
Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.642145 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.711757 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfrp8\" (UniqueName: \"kubernetes.io/projected/540caecb-e017-41ad-b3ad-c8854e7e968d-kube-api-access-lfrp8\") pod \"route-controller-manager-85776c4794-sb6cx\" (UID: \"540caecb-e017-41ad-b3ad-c8854e7e968d\") " pod="openshift-route-controller-manager/route-controller-manager-85776c4794-sb6cx"
Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.711830 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/540caecb-e017-41ad-b3ad-c8854e7e968d-config\") pod \"route-controller-manager-85776c4794-sb6cx\" (UID: \"540caecb-e017-41ad-b3ad-c8854e7e968d\") " pod="openshift-route-controller-manager/route-controller-manager-85776c4794-sb6cx"
Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.711908 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a848eafc-6251-4b18-94fd-dddb46db86ca-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-lhqrl\" (UID: \"a848eafc-6251-4b18-94fd-dddb46db86ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-lhqrl"
Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.712078 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzkr7\" (UniqueName: \"kubernetes.io/projected/a848eafc-6251-4b18-94fd-dddb46db86ca-kube-api-access-lzkr7\") pod \"marketplace-operator-79b997595-lhqrl\" (UID: \"a848eafc-6251-4b18-94fd-dddb46db86ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-lhqrl"
Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.712263 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a848eafc-6251-4b18-94fd-dddb46db86ca-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-lhqrl\" (UID: \"a848eafc-6251-4b18-94fd-dddb46db86ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-lhqrl"
Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.712297 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/540caecb-e017-41ad-b3ad-c8854e7e968d-client-ca\") pod \"route-controller-manager-85776c4794-sb6cx\" (UID: \"540caecb-e017-41ad-b3ad-c8854e7e968d\") " pod="openshift-route-controller-manager/route-controller-manager-85776c4794-sb6cx"
Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.712478 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/540caecb-e017-41ad-b3ad-c8854e7e968d-serving-cert\") pod \"route-controller-manager-85776c4794-sb6cx\" (UID: \"540caecb-e017-41ad-b3ad-c8854e7e968d\") " pod="openshift-route-controller-manager/route-controller-manager-85776c4794-sb6cx"
Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.814014 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/540caecb-e017-41ad-b3ad-c8854e7e968d-serving-cert\") pod \"route-controller-manager-85776c4794-sb6cx\" (UID: \"540caecb-e017-41ad-b3ad-c8854e7e968d\") " pod="openshift-route-controller-manager/route-controller-manager-85776c4794-sb6cx"
Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.814068 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfrp8\" (UniqueName: \"kubernetes.io/projected/540caecb-e017-41ad-b3ad-c8854e7e968d-kube-api-access-lfrp8\") pod \"route-controller-manager-85776c4794-sb6cx\" (UID: \"540caecb-e017-41ad-b3ad-c8854e7e968d\") " pod="openshift-route-controller-manager/route-controller-manager-85776c4794-sb6cx"
Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.814089 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/540caecb-e017-41ad-b3ad-c8854e7e968d-config\") pod \"route-controller-manager-85776c4794-sb6cx\" (UID: \"540caecb-e017-41ad-b3ad-c8854e7e968d\") " pod="openshift-route-controller-manager/route-controller-manager-85776c4794-sb6cx"
Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.814125 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume
\"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a848eafc-6251-4b18-94fd-dddb46db86ca-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-lhqrl\" (UID: \"a848eafc-6251-4b18-94fd-dddb46db86ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-lhqrl" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.814145 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzkr7\" (UniqueName: \"kubernetes.io/projected/a848eafc-6251-4b18-94fd-dddb46db86ca-kube-api-access-lzkr7\") pod \"marketplace-operator-79b997595-lhqrl\" (UID: \"a848eafc-6251-4b18-94fd-dddb46db86ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-lhqrl" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.814174 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a848eafc-6251-4b18-94fd-dddb46db86ca-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-lhqrl\" (UID: \"a848eafc-6251-4b18-94fd-dddb46db86ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-lhqrl" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.814189 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/540caecb-e017-41ad-b3ad-c8854e7e968d-client-ca\") pod \"route-controller-manager-85776c4794-sb6cx\" (UID: \"540caecb-e017-41ad-b3ad-c8854e7e968d\") " pod="openshift-route-controller-manager/route-controller-manager-85776c4794-sb6cx" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.815368 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/540caecb-e017-41ad-b3ad-c8854e7e968d-client-ca\") pod \"route-controller-manager-85776c4794-sb6cx\" (UID: \"540caecb-e017-41ad-b3ad-c8854e7e968d\") " 
pod="openshift-route-controller-manager/route-controller-manager-85776c4794-sb6cx" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.815653 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/540caecb-e017-41ad-b3ad-c8854e7e968d-config\") pod \"route-controller-manager-85776c4794-sb6cx\" (UID: \"540caecb-e017-41ad-b3ad-c8854e7e968d\") " pod="openshift-route-controller-manager/route-controller-manager-85776c4794-sb6cx" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.816625 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a848eafc-6251-4b18-94fd-dddb46db86ca-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-lhqrl\" (UID: \"a848eafc-6251-4b18-94fd-dddb46db86ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-lhqrl" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.820764 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a848eafc-6251-4b18-94fd-dddb46db86ca-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-lhqrl\" (UID: \"a848eafc-6251-4b18-94fd-dddb46db86ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-lhqrl" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.821304 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/540caecb-e017-41ad-b3ad-c8854e7e968d-serving-cert\") pod \"route-controller-manager-85776c4794-sb6cx\" (UID: \"540caecb-e017-41ad-b3ad-c8854e7e968d\") " pod="openshift-route-controller-manager/route-controller-manager-85776c4794-sb6cx" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.831549 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzkr7\" (UniqueName: 
\"kubernetes.io/projected/a848eafc-6251-4b18-94fd-dddb46db86ca-kube-api-access-lzkr7\") pod \"marketplace-operator-79b997595-lhqrl\" (UID: \"a848eafc-6251-4b18-94fd-dddb46db86ca\") " pod="openshift-marketplace/marketplace-operator-79b997595-lhqrl" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.832421 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfrp8\" (UniqueName: \"kubernetes.io/projected/540caecb-e017-41ad-b3ad-c8854e7e968d-kube-api-access-lfrp8\") pod \"route-controller-manager-85776c4794-sb6cx\" (UID: \"540caecb-e017-41ad-b3ad-c8854e7e968d\") " pod="openshift-route-controller-manager/route-controller-manager-85776c4794-sb6cx" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.937738 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85776c4794-sb6cx" Jan 21 15:52:26 crc kubenswrapper[4760]: I0121 15:52:26.949236 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-lhqrl" Jan 21 15:52:27 crc kubenswrapper[4760]: I0121 15:52:27.134113 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85776c4794-sb6cx"] Jan 21 15:52:27 crc kubenswrapper[4760]: W0121 15:52:27.149153 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod540caecb_e017_41ad_b3ad_c8854e7e968d.slice/crio-6255ad333c5188efaf495de6995ebd0928c5b2105cfaadc131b1a2c8c929c1aa WatchSource:0}: Error finding container 6255ad333c5188efaf495de6995ebd0928c5b2105cfaadc131b1a2c8c929c1aa: Status 404 returned error can't find the container with id 6255ad333c5188efaf495de6995ebd0928c5b2105cfaadc131b1a2c8c929c1aa Jan 21 15:52:27 crc kubenswrapper[4760]: I0121 15:52:27.246059 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 21 15:52:27 crc kubenswrapper[4760]: I0121 15:52:27.486503 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lhqrl"] Jan 21 15:52:27 crc kubenswrapper[4760]: W0121 15:52:27.496017 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda848eafc_6251_4b18_94fd_dddb46db86ca.slice/crio-0b6af49e84435cad21a9408f3d456a23805d1a037a90a24621b68c5e34231748 WatchSource:0}: Error finding container 0b6af49e84435cad21a9408f3d456a23805d1a037a90a24621b68c5e34231748: Status 404 returned error can't find the container with id 0b6af49e84435cad21a9408f3d456a23805d1a037a90a24621b68c5e34231748 Jan 21 15:52:27 crc kubenswrapper[4760]: I0121 15:52:27.808345 4760 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 21 15:52:27 crc kubenswrapper[4760]: I0121 15:52:27.912027 4760 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-lhqrl" event={"ID":"a848eafc-6251-4b18-94fd-dddb46db86ca","Type":"ContainerStarted","Data":"2781d872d18b3118600dff9f6a5922e60f2fd0ffd8f2395cecfee6095bae1c9d"} Jan 21 15:52:27 crc kubenswrapper[4760]: I0121 15:52:27.912081 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-lhqrl" event={"ID":"a848eafc-6251-4b18-94fd-dddb46db86ca","Type":"ContainerStarted","Data":"0b6af49e84435cad21a9408f3d456a23805d1a037a90a24621b68c5e34231748"} Jan 21 15:52:27 crc kubenswrapper[4760]: I0121 15:52:27.912292 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-lhqrl" Jan 21 15:52:27 crc kubenswrapper[4760]: I0121 15:52:27.913775 4760 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-lhqrl container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.63:8080/healthz\": dial tcp 10.217.0.63:8080: connect: connection refused" start-of-body= Jan 21 15:52:27 crc kubenswrapper[4760]: I0121 15:52:27.913820 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-lhqrl" podUID="a848eafc-6251-4b18-94fd-dddb46db86ca" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.63:8080/healthz\": dial tcp 10.217.0.63:8080: connect: connection refused" Jan 21 15:52:27 crc kubenswrapper[4760]: I0121 15:52:27.913926 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85776c4794-sb6cx" event={"ID":"540caecb-e017-41ad-b3ad-c8854e7e968d","Type":"ContainerStarted","Data":"26c0c2ae14b91ab50936440a1422c51c59193f29195bcfa5d9c2065110254d73"} Jan 21 15:52:27 crc kubenswrapper[4760]: I0121 15:52:27.913955 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-85776c4794-sb6cx" event={"ID":"540caecb-e017-41ad-b3ad-c8854e7e968d","Type":"ContainerStarted","Data":"6255ad333c5188efaf495de6995ebd0928c5b2105cfaadc131b1a2c8c929c1aa"} Jan 21 15:52:27 crc kubenswrapper[4760]: I0121 15:52:27.914167 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-85776c4794-sb6cx" Jan 21 15:52:27 crc kubenswrapper[4760]: I0121 15:52:27.930648 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-lhqrl" podStartSLOduration=32.930620121 podStartE2EDuration="32.930620121s" podCreationTimestamp="2026-01-21 15:51:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:52:27.929491292 +0000 UTC m=+318.597260870" watchObservedRunningTime="2026-01-21 15:52:27.930620121 +0000 UTC m=+318.598389689" Jan 21 15:52:27 crc kubenswrapper[4760]: I0121 15:52:27.947198 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-85776c4794-sb6cx" podStartSLOduration=2.947175934 podStartE2EDuration="2.947175934s" podCreationTimestamp="2026-01-21 15:52:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:52:27.944872474 +0000 UTC m=+318.612642052" watchObservedRunningTime="2026-01-21 15:52:27.947175934 +0000 UTC m=+318.614945502" Jan 21 15:52:28 crc kubenswrapper[4760]: I0121 15:52:28.019694 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-85776c4794-sb6cx" Jan 21 15:52:28 crc kubenswrapper[4760]: I0121 15:52:28.305045 4760 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"trusted-ca" Jan 21 15:52:28 crc kubenswrapper[4760]: I0121 15:52:28.340852 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6dfcd5c5b4-l77fp"] Jan 21 15:52:28 crc kubenswrapper[4760]: I0121 15:52:28.341564 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-l77fp" Jan 21 15:52:28 crc kubenswrapper[4760]: I0121 15:52:28.344697 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 21 15:52:28 crc kubenswrapper[4760]: I0121 15:52:28.344771 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 21 15:52:28 crc kubenswrapper[4760]: I0121 15:52:28.345447 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 21 15:52:28 crc kubenswrapper[4760]: I0121 15:52:28.346606 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 21 15:52:28 crc kubenswrapper[4760]: I0121 15:52:28.347949 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 21 15:52:28 crc kubenswrapper[4760]: I0121 15:52:28.348278 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 21 15:52:28 crc kubenswrapper[4760]: I0121 15:52:28.363916 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6dfcd5c5b4-l77fp"] Jan 21 15:52:28 crc kubenswrapper[4760]: I0121 15:52:28.367616 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 21 15:52:28 crc kubenswrapper[4760]: I0121 15:52:28.460808 4760 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2ff1b328-fc31-4f6c-af2a-7d7741749dc4-proxy-ca-bundles\") pod \"controller-manager-6dfcd5c5b4-l77fp\" (UID: \"2ff1b328-fc31-4f6c-af2a-7d7741749dc4\") " pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-l77fp" Jan 21 15:52:28 crc kubenswrapper[4760]: I0121 15:52:28.460926 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ff1b328-fc31-4f6c-af2a-7d7741749dc4-serving-cert\") pod \"controller-manager-6dfcd5c5b4-l77fp\" (UID: \"2ff1b328-fc31-4f6c-af2a-7d7741749dc4\") " pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-l77fp" Jan 21 15:52:28 crc kubenswrapper[4760]: I0121 15:52:28.460991 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2ff1b328-fc31-4f6c-af2a-7d7741749dc4-client-ca\") pod \"controller-manager-6dfcd5c5b4-l77fp\" (UID: \"2ff1b328-fc31-4f6c-af2a-7d7741749dc4\") " pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-l77fp" Jan 21 15:52:28 crc kubenswrapper[4760]: I0121 15:52:28.461045 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2526\" (UniqueName: \"kubernetes.io/projected/2ff1b328-fc31-4f6c-af2a-7d7741749dc4-kube-api-access-p2526\") pod \"controller-manager-6dfcd5c5b4-l77fp\" (UID: \"2ff1b328-fc31-4f6c-af2a-7d7741749dc4\") " pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-l77fp" Jan 21 15:52:28 crc kubenswrapper[4760]: I0121 15:52:28.461072 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ff1b328-fc31-4f6c-af2a-7d7741749dc4-config\") pod 
\"controller-manager-6dfcd5c5b4-l77fp\" (UID: \"2ff1b328-fc31-4f6c-af2a-7d7741749dc4\") " pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-l77fp" Jan 21 15:52:28 crc kubenswrapper[4760]: I0121 15:52:28.533987 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 21 15:52:28 crc kubenswrapper[4760]: I0121 15:52:28.562187 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ff1b328-fc31-4f6c-af2a-7d7741749dc4-serving-cert\") pod \"controller-manager-6dfcd5c5b4-l77fp\" (UID: \"2ff1b328-fc31-4f6c-af2a-7d7741749dc4\") " pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-l77fp" Jan 21 15:52:28 crc kubenswrapper[4760]: I0121 15:52:28.562690 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2ff1b328-fc31-4f6c-af2a-7d7741749dc4-client-ca\") pod \"controller-manager-6dfcd5c5b4-l77fp\" (UID: \"2ff1b328-fc31-4f6c-af2a-7d7741749dc4\") " pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-l77fp" Jan 21 15:52:28 crc kubenswrapper[4760]: I0121 15:52:28.562817 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2526\" (UniqueName: \"kubernetes.io/projected/2ff1b328-fc31-4f6c-af2a-7d7741749dc4-kube-api-access-p2526\") pod \"controller-manager-6dfcd5c5b4-l77fp\" (UID: \"2ff1b328-fc31-4f6c-af2a-7d7741749dc4\") " pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-l77fp" Jan 21 15:52:28 crc kubenswrapper[4760]: I0121 15:52:28.562923 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ff1b328-fc31-4f6c-af2a-7d7741749dc4-config\") pod \"controller-manager-6dfcd5c5b4-l77fp\" (UID: \"2ff1b328-fc31-4f6c-af2a-7d7741749dc4\") " 
pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-l77fp" Jan 21 15:52:28 crc kubenswrapper[4760]: I0121 15:52:28.563062 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2ff1b328-fc31-4f6c-af2a-7d7741749dc4-proxy-ca-bundles\") pod \"controller-manager-6dfcd5c5b4-l77fp\" (UID: \"2ff1b328-fc31-4f6c-af2a-7d7741749dc4\") " pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-l77fp" Jan 21 15:52:28 crc kubenswrapper[4760]: I0121 15:52:28.563595 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2ff1b328-fc31-4f6c-af2a-7d7741749dc4-client-ca\") pod \"controller-manager-6dfcd5c5b4-l77fp\" (UID: \"2ff1b328-fc31-4f6c-af2a-7d7741749dc4\") " pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-l77fp" Jan 21 15:52:28 crc kubenswrapper[4760]: I0121 15:52:28.564654 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2ff1b328-fc31-4f6c-af2a-7d7741749dc4-proxy-ca-bundles\") pod \"controller-manager-6dfcd5c5b4-l77fp\" (UID: \"2ff1b328-fc31-4f6c-af2a-7d7741749dc4\") " pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-l77fp" Jan 21 15:52:28 crc kubenswrapper[4760]: I0121 15:52:28.564939 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ff1b328-fc31-4f6c-af2a-7d7741749dc4-config\") pod \"controller-manager-6dfcd5c5b4-l77fp\" (UID: \"2ff1b328-fc31-4f6c-af2a-7d7741749dc4\") " pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-l77fp" Jan 21 15:52:28 crc kubenswrapper[4760]: I0121 15:52:28.570925 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ff1b328-fc31-4f6c-af2a-7d7741749dc4-serving-cert\") pod 
\"controller-manager-6dfcd5c5b4-l77fp\" (UID: \"2ff1b328-fc31-4f6c-af2a-7d7741749dc4\") " pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-l77fp" Jan 21 15:52:28 crc kubenswrapper[4760]: I0121 15:52:28.581163 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2526\" (UniqueName: \"kubernetes.io/projected/2ff1b328-fc31-4f6c-af2a-7d7741749dc4-kube-api-access-p2526\") pod \"controller-manager-6dfcd5c5b4-l77fp\" (UID: \"2ff1b328-fc31-4f6c-af2a-7d7741749dc4\") " pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-l77fp" Jan 21 15:52:28 crc kubenswrapper[4760]: I0121 15:52:28.667246 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-l77fp" Jan 21 15:52:28 crc kubenswrapper[4760]: I0121 15:52:28.858348 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6dfcd5c5b4-l77fp"] Jan 21 15:52:28 crc kubenswrapper[4760]: I0121 15:52:28.922214 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-l77fp" event={"ID":"2ff1b328-fc31-4f6c-af2a-7d7741749dc4","Type":"ContainerStarted","Data":"a449d2b81302216638d6008030083b87b938be681ebb167f6a80436b4e6252c6"} Jan 21 15:52:28 crc kubenswrapper[4760]: I0121 15:52:28.926465 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-lhqrl" Jan 21 15:52:29 crc kubenswrapper[4760]: I0121 15:52:29.485695 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-p467d"] Jan 21 15:52:29 crc kubenswrapper[4760]: I0121 15:52:29.865974 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 21 15:52:29 crc kubenswrapper[4760]: I0121 15:52:29.933169 4760 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-l77fp" event={"ID":"2ff1b328-fc31-4f6c-af2a-7d7741749dc4","Type":"ContainerStarted","Data":"6fa1ce1e9a945cbfd615331d65a8f1ac64e6916c71d2157ee8cd5d0850bd90e3"} Jan 21 15:52:29 crc kubenswrapper[4760]: I0121 15:52:29.951369 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-l77fp" podStartSLOduration=4.951347117 podStartE2EDuration="4.951347117s" podCreationTimestamp="2026-01-21 15:52:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:52:29.94956106 +0000 UTC m=+320.617330638" watchObservedRunningTime="2026-01-21 15:52:29.951347117 +0000 UTC m=+320.619116705" Jan 21 15:52:30 crc kubenswrapper[4760]: I0121 15:52:30.938059 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-l77fp" Jan 21 15:52:30 crc kubenswrapper[4760]: I0121 15:52:30.942091 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-l77fp" Jan 21 15:52:32 crc kubenswrapper[4760]: I0121 15:52:32.200105 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 21 15:52:34 crc kubenswrapper[4760]: I0121 15:52:34.217746 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 21 15:52:45 crc kubenswrapper[4760]: I0121 15:52:45.454530 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6dfcd5c5b4-l77fp"] Jan 21 15:52:45 crc kubenswrapper[4760]: I0121 15:52:45.454996 4760 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-l77fp" podUID="2ff1b328-fc31-4f6c-af2a-7d7741749dc4" containerName="controller-manager" containerID="cri-o://6fa1ce1e9a945cbfd615331d65a8f1ac64e6916c71d2157ee8cd5d0850bd90e3" gracePeriod=30 Jan 21 15:52:46 crc kubenswrapper[4760]: I0121 15:52:46.001199 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-l77fp" Jan 21 15:52:46 crc kubenswrapper[4760]: I0121 15:52:46.010855 4760 generic.go:334] "Generic (PLEG): container finished" podID="2ff1b328-fc31-4f6c-af2a-7d7741749dc4" containerID="6fa1ce1e9a945cbfd615331d65a8f1ac64e6916c71d2157ee8cd5d0850bd90e3" exitCode=0 Jan 21 15:52:46 crc kubenswrapper[4760]: I0121 15:52:46.010919 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-l77fp" Jan 21 15:52:46 crc kubenswrapper[4760]: I0121 15:52:46.010929 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-l77fp" event={"ID":"2ff1b328-fc31-4f6c-af2a-7d7741749dc4","Type":"ContainerDied","Data":"6fa1ce1e9a945cbfd615331d65a8f1ac64e6916c71d2157ee8cd5d0850bd90e3"} Jan 21 15:52:46 crc kubenswrapper[4760]: I0121 15:52:46.011048 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6dfcd5c5b4-l77fp" event={"ID":"2ff1b328-fc31-4f6c-af2a-7d7741749dc4","Type":"ContainerDied","Data":"a449d2b81302216638d6008030083b87b938be681ebb167f6a80436b4e6252c6"} Jan 21 15:52:46 crc kubenswrapper[4760]: I0121 15:52:46.011087 4760 scope.go:117] "RemoveContainer" containerID="6fa1ce1e9a945cbfd615331d65a8f1ac64e6916c71d2157ee8cd5d0850bd90e3" Jan 21 15:52:46 crc kubenswrapper[4760]: I0121 15:52:46.042297 4760 scope.go:117] "RemoveContainer" containerID="6fa1ce1e9a945cbfd615331d65a8f1ac64e6916c71d2157ee8cd5d0850bd90e3" Jan 21 15:52:46 crc 
kubenswrapper[4760]: E0121 15:52:46.044259 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fa1ce1e9a945cbfd615331d65a8f1ac64e6916c71d2157ee8cd5d0850bd90e3\": container with ID starting with 6fa1ce1e9a945cbfd615331d65a8f1ac64e6916c71d2157ee8cd5d0850bd90e3 not found: ID does not exist" containerID="6fa1ce1e9a945cbfd615331d65a8f1ac64e6916c71d2157ee8cd5d0850bd90e3" Jan 21 15:52:46 crc kubenswrapper[4760]: I0121 15:52:46.044375 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fa1ce1e9a945cbfd615331d65a8f1ac64e6916c71d2157ee8cd5d0850bd90e3"} err="failed to get container status \"6fa1ce1e9a945cbfd615331d65a8f1ac64e6916c71d2157ee8cd5d0850bd90e3\": rpc error: code = NotFound desc = could not find container \"6fa1ce1e9a945cbfd615331d65a8f1ac64e6916c71d2157ee8cd5d0850bd90e3\": container with ID starting with 6fa1ce1e9a945cbfd615331d65a8f1ac64e6916c71d2157ee8cd5d0850bd90e3 not found: ID does not exist" Jan 21 15:52:46 crc kubenswrapper[4760]: I0121 15:52:46.182508 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2526\" (UniqueName: \"kubernetes.io/projected/2ff1b328-fc31-4f6c-af2a-7d7741749dc4-kube-api-access-p2526\") pod \"2ff1b328-fc31-4f6c-af2a-7d7741749dc4\" (UID: \"2ff1b328-fc31-4f6c-af2a-7d7741749dc4\") " Jan 21 15:52:46 crc kubenswrapper[4760]: I0121 15:52:46.182881 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2ff1b328-fc31-4f6c-af2a-7d7741749dc4-client-ca\") pod \"2ff1b328-fc31-4f6c-af2a-7d7741749dc4\" (UID: \"2ff1b328-fc31-4f6c-af2a-7d7741749dc4\") " Jan 21 15:52:46 crc kubenswrapper[4760]: I0121 15:52:46.182933 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/2ff1b328-fc31-4f6c-af2a-7d7741749dc4-proxy-ca-bundles\") pod \"2ff1b328-fc31-4f6c-af2a-7d7741749dc4\" (UID: \"2ff1b328-fc31-4f6c-af2a-7d7741749dc4\") " Jan 21 15:52:46 crc kubenswrapper[4760]: I0121 15:52:46.183902 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ff1b328-fc31-4f6c-af2a-7d7741749dc4-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "2ff1b328-fc31-4f6c-af2a-7d7741749dc4" (UID: "2ff1b328-fc31-4f6c-af2a-7d7741749dc4"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:52:46 crc kubenswrapper[4760]: I0121 15:52:46.183946 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ff1b328-fc31-4f6c-af2a-7d7741749dc4-client-ca" (OuterVolumeSpecName: "client-ca") pod "2ff1b328-fc31-4f6c-af2a-7d7741749dc4" (UID: "2ff1b328-fc31-4f6c-af2a-7d7741749dc4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:52:46 crc kubenswrapper[4760]: I0121 15:52:46.182961 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ff1b328-fc31-4f6c-af2a-7d7741749dc4-config\") pod \"2ff1b328-fc31-4f6c-af2a-7d7741749dc4\" (UID: \"2ff1b328-fc31-4f6c-af2a-7d7741749dc4\") " Jan 21 15:52:46 crc kubenswrapper[4760]: I0121 15:52:46.184076 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ff1b328-fc31-4f6c-af2a-7d7741749dc4-config" (OuterVolumeSpecName: "config") pod "2ff1b328-fc31-4f6c-af2a-7d7741749dc4" (UID: "2ff1b328-fc31-4f6c-af2a-7d7741749dc4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:52:46 crc kubenswrapper[4760]: I0121 15:52:46.184117 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ff1b328-fc31-4f6c-af2a-7d7741749dc4-serving-cert\") pod \"2ff1b328-fc31-4f6c-af2a-7d7741749dc4\" (UID: \"2ff1b328-fc31-4f6c-af2a-7d7741749dc4\") " Jan 21 15:52:46 crc kubenswrapper[4760]: I0121 15:52:46.184651 4760 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2ff1b328-fc31-4f6c-af2a-7d7741749dc4-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:46 crc kubenswrapper[4760]: I0121 15:52:46.184680 4760 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2ff1b328-fc31-4f6c-af2a-7d7741749dc4-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:46 crc kubenswrapper[4760]: I0121 15:52:46.184695 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ff1b328-fc31-4f6c-af2a-7d7741749dc4-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:46 crc kubenswrapper[4760]: I0121 15:52:46.190429 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ff1b328-fc31-4f6c-af2a-7d7741749dc4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2ff1b328-fc31-4f6c-af2a-7d7741749dc4" (UID: "2ff1b328-fc31-4f6c-af2a-7d7741749dc4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:52:46 crc kubenswrapper[4760]: I0121 15:52:46.190478 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ff1b328-fc31-4f6c-af2a-7d7741749dc4-kube-api-access-p2526" (OuterVolumeSpecName: "kube-api-access-p2526") pod "2ff1b328-fc31-4f6c-af2a-7d7741749dc4" (UID: "2ff1b328-fc31-4f6c-af2a-7d7741749dc4"). 
InnerVolumeSpecName "kube-api-access-p2526". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:52:46 crc kubenswrapper[4760]: I0121 15:52:46.286185 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ff1b328-fc31-4f6c-af2a-7d7741749dc4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:46 crc kubenswrapper[4760]: I0121 15:52:46.286230 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2526\" (UniqueName: \"kubernetes.io/projected/2ff1b328-fc31-4f6c-af2a-7d7741749dc4-kube-api-access-p2526\") on node \"crc\" DevicePath \"\"" Jan 21 15:52:46 crc kubenswrapper[4760]: I0121 15:52:46.340688 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6dfcd5c5b4-l77fp"] Jan 21 15:52:46 crc kubenswrapper[4760]: I0121 15:52:46.344513 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6dfcd5c5b4-l77fp"] Jan 21 15:52:47 crc kubenswrapper[4760]: I0121 15:52:47.355530 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6989945b56-jbkp9"] Jan 21 15:52:47 crc kubenswrapper[4760]: E0121 15:52:47.355789 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ff1b328-fc31-4f6c-af2a-7d7741749dc4" containerName="controller-manager" Jan 21 15:52:47 crc kubenswrapper[4760]: I0121 15:52:47.355804 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ff1b328-fc31-4f6c-af2a-7d7741749dc4" containerName="controller-manager" Jan 21 15:52:47 crc kubenswrapper[4760]: I0121 15:52:47.355915 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ff1b328-fc31-4f6c-af2a-7d7741749dc4" containerName="controller-manager" Jan 21 15:52:47 crc kubenswrapper[4760]: I0121 15:52:47.356410 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6989945b56-jbkp9" Jan 21 15:52:47 crc kubenswrapper[4760]: I0121 15:52:47.358654 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 21 15:52:47 crc kubenswrapper[4760]: I0121 15:52:47.358710 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 21 15:52:47 crc kubenswrapper[4760]: I0121 15:52:47.358795 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 21 15:52:47 crc kubenswrapper[4760]: I0121 15:52:47.358848 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 21 15:52:47 crc kubenswrapper[4760]: I0121 15:52:47.360808 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 21 15:52:47 crc kubenswrapper[4760]: I0121 15:52:47.367630 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6989945b56-jbkp9"] Jan 21 15:52:47 crc kubenswrapper[4760]: I0121 15:52:47.369832 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 21 15:52:47 crc kubenswrapper[4760]: I0121 15:52:47.377480 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 21 15:52:47 crc kubenswrapper[4760]: I0121 15:52:47.501854 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4511f84-306a-473c-9408-c4e7fde4dbee-serving-cert\") pod \"controller-manager-6989945b56-jbkp9\" (UID: \"a4511f84-306a-473c-9408-c4e7fde4dbee\") " 
pod="openshift-controller-manager/controller-manager-6989945b56-jbkp9" Jan 21 15:52:47 crc kubenswrapper[4760]: I0121 15:52:47.501984 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a4511f84-306a-473c-9408-c4e7fde4dbee-client-ca\") pod \"controller-manager-6989945b56-jbkp9\" (UID: \"a4511f84-306a-473c-9408-c4e7fde4dbee\") " pod="openshift-controller-manager/controller-manager-6989945b56-jbkp9" Jan 21 15:52:47 crc kubenswrapper[4760]: I0121 15:52:47.502016 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2l5xj\" (UniqueName: \"kubernetes.io/projected/a4511f84-306a-473c-9408-c4e7fde4dbee-kube-api-access-2l5xj\") pod \"controller-manager-6989945b56-jbkp9\" (UID: \"a4511f84-306a-473c-9408-c4e7fde4dbee\") " pod="openshift-controller-manager/controller-manager-6989945b56-jbkp9" Jan 21 15:52:47 crc kubenswrapper[4760]: I0121 15:52:47.502041 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4511f84-306a-473c-9408-c4e7fde4dbee-config\") pod \"controller-manager-6989945b56-jbkp9\" (UID: \"a4511f84-306a-473c-9408-c4e7fde4dbee\") " pod="openshift-controller-manager/controller-manager-6989945b56-jbkp9" Jan 21 15:52:47 crc kubenswrapper[4760]: I0121 15:52:47.502082 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a4511f84-306a-473c-9408-c4e7fde4dbee-proxy-ca-bundles\") pod \"controller-manager-6989945b56-jbkp9\" (UID: \"a4511f84-306a-473c-9408-c4e7fde4dbee\") " pod="openshift-controller-manager/controller-manager-6989945b56-jbkp9" Jan 21 15:52:47 crc kubenswrapper[4760]: I0121 15:52:47.603507 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a4511f84-306a-473c-9408-c4e7fde4dbee-config\") pod \"controller-manager-6989945b56-jbkp9\" (UID: \"a4511f84-306a-473c-9408-c4e7fde4dbee\") " pod="openshift-controller-manager/controller-manager-6989945b56-jbkp9" Jan 21 15:52:47 crc kubenswrapper[4760]: I0121 15:52:47.603572 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a4511f84-306a-473c-9408-c4e7fde4dbee-proxy-ca-bundles\") pod \"controller-manager-6989945b56-jbkp9\" (UID: \"a4511f84-306a-473c-9408-c4e7fde4dbee\") " pod="openshift-controller-manager/controller-manager-6989945b56-jbkp9" Jan 21 15:52:47 crc kubenswrapper[4760]: I0121 15:52:47.603613 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4511f84-306a-473c-9408-c4e7fde4dbee-serving-cert\") pod \"controller-manager-6989945b56-jbkp9\" (UID: \"a4511f84-306a-473c-9408-c4e7fde4dbee\") " pod="openshift-controller-manager/controller-manager-6989945b56-jbkp9" Jan 21 15:52:47 crc kubenswrapper[4760]: I0121 15:52:47.603690 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a4511f84-306a-473c-9408-c4e7fde4dbee-client-ca\") pod \"controller-manager-6989945b56-jbkp9\" (UID: \"a4511f84-306a-473c-9408-c4e7fde4dbee\") " pod="openshift-controller-manager/controller-manager-6989945b56-jbkp9" Jan 21 15:52:47 crc kubenswrapper[4760]: I0121 15:52:47.603748 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2l5xj\" (UniqueName: \"kubernetes.io/projected/a4511f84-306a-473c-9408-c4e7fde4dbee-kube-api-access-2l5xj\") pod \"controller-manager-6989945b56-jbkp9\" (UID: \"a4511f84-306a-473c-9408-c4e7fde4dbee\") " pod="openshift-controller-manager/controller-manager-6989945b56-jbkp9" Jan 21 15:52:47 crc kubenswrapper[4760]: I0121 15:52:47.605454 4760 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a4511f84-306a-473c-9408-c4e7fde4dbee-client-ca\") pod \"controller-manager-6989945b56-jbkp9\" (UID: \"a4511f84-306a-473c-9408-c4e7fde4dbee\") " pod="openshift-controller-manager/controller-manager-6989945b56-jbkp9" Jan 21 15:52:47 crc kubenswrapper[4760]: I0121 15:52:47.605695 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a4511f84-306a-473c-9408-c4e7fde4dbee-proxy-ca-bundles\") pod \"controller-manager-6989945b56-jbkp9\" (UID: \"a4511f84-306a-473c-9408-c4e7fde4dbee\") " pod="openshift-controller-manager/controller-manager-6989945b56-jbkp9" Jan 21 15:52:47 crc kubenswrapper[4760]: I0121 15:52:47.606853 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4511f84-306a-473c-9408-c4e7fde4dbee-config\") pod \"controller-manager-6989945b56-jbkp9\" (UID: \"a4511f84-306a-473c-9408-c4e7fde4dbee\") " pod="openshift-controller-manager/controller-manager-6989945b56-jbkp9" Jan 21 15:52:47 crc kubenswrapper[4760]: I0121 15:52:47.611355 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4511f84-306a-473c-9408-c4e7fde4dbee-serving-cert\") pod \"controller-manager-6989945b56-jbkp9\" (UID: \"a4511f84-306a-473c-9408-c4e7fde4dbee\") " pod="openshift-controller-manager/controller-manager-6989945b56-jbkp9" Jan 21 15:52:47 crc kubenswrapper[4760]: I0121 15:52:47.628841 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ff1b328-fc31-4f6c-af2a-7d7741749dc4" path="/var/lib/kubelet/pods/2ff1b328-fc31-4f6c-af2a-7d7741749dc4/volumes" Jan 21 15:52:47 crc kubenswrapper[4760]: I0121 15:52:47.629448 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2l5xj\" (UniqueName: 
\"kubernetes.io/projected/a4511f84-306a-473c-9408-c4e7fde4dbee-kube-api-access-2l5xj\") pod \"controller-manager-6989945b56-jbkp9\" (UID: \"a4511f84-306a-473c-9408-c4e7fde4dbee\") " pod="openshift-controller-manager/controller-manager-6989945b56-jbkp9" Jan 21 15:52:47 crc kubenswrapper[4760]: I0121 15:52:47.694136 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6989945b56-jbkp9" Jan 21 15:52:48 crc kubenswrapper[4760]: I0121 15:52:48.094068 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6989945b56-jbkp9"] Jan 21 15:52:49 crc kubenswrapper[4760]: I0121 15:52:49.032394 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6989945b56-jbkp9" event={"ID":"a4511f84-306a-473c-9408-c4e7fde4dbee","Type":"ContainerStarted","Data":"c29d0e98bdf24ffcb7e143dc47ecaf7520c940840e1944412ab0ac36583c79ad"} Jan 21 15:52:49 crc kubenswrapper[4760]: I0121 15:52:49.032464 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6989945b56-jbkp9" event={"ID":"a4511f84-306a-473c-9408-c4e7fde4dbee","Type":"ContainerStarted","Data":"42ed2fff1c1f8d4e98b2ace9950f0279ea6f4d62a45bc780bfc53d392540a434"} Jan 21 15:52:49 crc kubenswrapper[4760]: I0121 15:52:49.032688 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6989945b56-jbkp9" Jan 21 15:52:49 crc kubenswrapper[4760]: I0121 15:52:49.037454 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6989945b56-jbkp9" Jan 21 15:52:49 crc kubenswrapper[4760]: I0121 15:52:49.075250 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6989945b56-jbkp9" podStartSLOduration=4.075226461 
podStartE2EDuration="4.075226461s" podCreationTimestamp="2026-01-21 15:52:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:52:49.054210255 +0000 UTC m=+339.721979843" watchObservedRunningTime="2026-01-21 15:52:49.075226461 +0000 UTC m=+339.742996049" Jan 21 15:52:54 crc kubenswrapper[4760]: I0121 15:52:54.507752 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-p467d" podUID="802d2cd3-498b-4d87-880d-0f23a14c183f" containerName="oauth-openshift" containerID="cri-o://b6c2e618e87c07e302ec2de95b630de140c3ff47320b01201407090b65527bc1" gracePeriod=15 Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.048773 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-p467d" Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.071634 4760 generic.go:334] "Generic (PLEG): container finished" podID="802d2cd3-498b-4d87-880d-0f23a14c183f" containerID="b6c2e618e87c07e302ec2de95b630de140c3ff47320b01201407090b65527bc1" exitCode=0 Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.071685 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-p467d" event={"ID":"802d2cd3-498b-4d87-880d-0f23a14c183f","Type":"ContainerDied","Data":"b6c2e618e87c07e302ec2de95b630de140c3ff47320b01201407090b65527bc1"} Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.071710 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-p467d" event={"ID":"802d2cd3-498b-4d87-880d-0f23a14c183f","Type":"ContainerDied","Data":"1e75b9ece68e658666e9933bb9705b3b459094ff3173c1971fa155503f65bca4"} Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.071707 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-p467d" Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.071738 4760 scope.go:117] "RemoveContainer" containerID="b6c2e618e87c07e302ec2de95b630de140c3ff47320b01201407090b65527bc1" Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.077935 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"] Jan 21 15:52:55 crc kubenswrapper[4760]: E0121 15:52:55.078144 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="802d2cd3-498b-4d87-880d-0f23a14c183f" containerName="oauth-openshift" Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.078161 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="802d2cd3-498b-4d87-880d-0f23a14c183f" containerName="oauth-openshift" Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.078248 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="802d2cd3-498b-4d87-880d-0f23a14c183f" containerName="oauth-openshift" Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.078659 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl" Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.097413 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"] Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.113205 4760 scope.go:117] "RemoveContainer" containerID="b6c2e618e87c07e302ec2de95b630de140c3ff47320b01201407090b65527bc1" Jan 21 15:52:55 crc kubenswrapper[4760]: E0121 15:52:55.114987 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6c2e618e87c07e302ec2de95b630de140c3ff47320b01201407090b65527bc1\": container with ID starting with b6c2e618e87c07e302ec2de95b630de140c3ff47320b01201407090b65527bc1 not found: ID does not exist" containerID="b6c2e618e87c07e302ec2de95b630de140c3ff47320b01201407090b65527bc1" Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.115036 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6c2e618e87c07e302ec2de95b630de140c3ff47320b01201407090b65527bc1"} err="failed to get container status \"b6c2e618e87c07e302ec2de95b630de140c3ff47320b01201407090b65527bc1\": rpc error: code = NotFound desc = could not find container \"b6c2e618e87c07e302ec2de95b630de140c3ff47320b01201407090b65527bc1\": container with ID starting with b6c2e618e87c07e302ec2de95b630de140c3ff47320b01201407090b65527bc1 not found: ID does not exist" Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.235256 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-user-template-provider-selection\") pod \"802d2cd3-498b-4d87-880d-0f23a14c183f\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.235315 4760 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-ocp-branding-template\") pod \"802d2cd3-498b-4d87-880d-0f23a14c183f\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.235373 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-user-template-login\") pod \"802d2cd3-498b-4d87-880d-0f23a14c183f\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.235415 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-service-ca\") pod \"802d2cd3-498b-4d87-880d-0f23a14c183f\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.235455 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-user-template-error\") pod \"802d2cd3-498b-4d87-880d-0f23a14c183f\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.235495 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/802d2cd3-498b-4d87-880d-0f23a14c183f-audit-dir\") pod \"802d2cd3-498b-4d87-880d-0f23a14c183f\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.235541 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-serving-cert\") pod \"802d2cd3-498b-4d87-880d-0f23a14c183f\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.235576 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntxhv\" (UniqueName: \"kubernetes.io/projected/802d2cd3-498b-4d87-880d-0f23a14c183f-kube-api-access-ntxhv\") pod \"802d2cd3-498b-4d87-880d-0f23a14c183f\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.235613 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-cliconfig\") pod \"802d2cd3-498b-4d87-880d-0f23a14c183f\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.235634 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-user-idp-0-file-data\") pod \"802d2cd3-498b-4d87-880d-0f23a14c183f\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.235671 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-router-certs\") pod \"802d2cd3-498b-4d87-880d-0f23a14c183f\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.235700 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/802d2cd3-498b-4d87-880d-0f23a14c183f-audit-policies\") pod \"802d2cd3-498b-4d87-880d-0f23a14c183f\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.235724 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-trusted-ca-bundle\") pod \"802d2cd3-498b-4d87-880d-0f23a14c183f\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.235756 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-session\") pod \"802d2cd3-498b-4d87-880d-0f23a14c183f\" (UID: \"802d2cd3-498b-4d87-880d-0f23a14c183f\") " Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.235946 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0b57afe9-4d09-42a9-a337-3847d8d836d4-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl" Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.235975 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b57afe9-4d09-42a9-a337-3847d8d836d4-v4-0-config-system-service-ca\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl" Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.236014 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0b57afe9-4d09-42a9-a337-3847d8d836d4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl" Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.236043 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c6f9\" (UniqueName: \"kubernetes.io/projected/0b57afe9-4d09-42a9-a337-3847d8d836d4-kube-api-access-7c6f9\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl" Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.236081 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0b57afe9-4d09-42a9-a337-3847d8d836d4-v4-0-config-system-session\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl" Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.236107 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0b57afe9-4d09-42a9-a337-3847d8d836d4-v4-0-config-system-router-certs\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl" Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.236139 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/0b57afe9-4d09-42a9-a337-3847d8d836d4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl" Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.236164 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0b57afe9-4d09-42a9-a337-3847d8d836d4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl" Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.236188 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0b57afe9-4d09-42a9-a337-3847d8d836d4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl" Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.236212 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0b57afe9-4d09-42a9-a337-3847d8d836d4-audit-policies\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl" Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.236234 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0b57afe9-4d09-42a9-a337-3847d8d836d4-audit-dir\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " 
pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.236257 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0b57afe9-4d09-42a9-a337-3847d8d836d4-v4-0-config-user-template-error\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.236280 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b57afe9-4d09-42a9-a337-3847d8d836d4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.236311 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0b57afe9-4d09-42a9-a337-3847d8d836d4-v4-0-config-user-template-login\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.236400 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "802d2cd3-498b-4d87-880d-0f23a14c183f" (UID: "802d2cd3-498b-4d87-880d-0f23a14c183f"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.236663 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/802d2cd3-498b-4d87-880d-0f23a14c183f-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "802d2cd3-498b-4d87-880d-0f23a14c183f" (UID: "802d2cd3-498b-4d87-880d-0f23a14c183f"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.236979 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "802d2cd3-498b-4d87-880d-0f23a14c183f" (UID: "802d2cd3-498b-4d87-880d-0f23a14c183f"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.237202 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/802d2cd3-498b-4d87-880d-0f23a14c183f-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "802d2cd3-498b-4d87-880d-0f23a14c183f" (UID: "802d2cd3-498b-4d87-880d-0f23a14c183f"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.237675 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "802d2cd3-498b-4d87-880d-0f23a14c183f" (UID: "802d2cd3-498b-4d87-880d-0f23a14c183f"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.244659 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "802d2cd3-498b-4d87-880d-0f23a14c183f" (UID: "802d2cd3-498b-4d87-880d-0f23a14c183f"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.245001 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "802d2cd3-498b-4d87-880d-0f23a14c183f" (UID: "802d2cd3-498b-4d87-880d-0f23a14c183f"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.246787 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "802d2cd3-498b-4d87-880d-0f23a14c183f" (UID: "802d2cd3-498b-4d87-880d-0f23a14c183f"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.246832 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/802d2cd3-498b-4d87-880d-0f23a14c183f-kube-api-access-ntxhv" (OuterVolumeSpecName: "kube-api-access-ntxhv") pod "802d2cd3-498b-4d87-880d-0f23a14c183f" (UID: "802d2cd3-498b-4d87-880d-0f23a14c183f"). InnerVolumeSpecName "kube-api-access-ntxhv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.247423 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "802d2cd3-498b-4d87-880d-0f23a14c183f" (UID: "802d2cd3-498b-4d87-880d-0f23a14c183f"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.256516 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "802d2cd3-498b-4d87-880d-0f23a14c183f" (UID: "802d2cd3-498b-4d87-880d-0f23a14c183f"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.256845 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "802d2cd3-498b-4d87-880d-0f23a14c183f" (UID: "802d2cd3-498b-4d87-880d-0f23a14c183f"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.257235 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "802d2cd3-498b-4d87-880d-0f23a14c183f" (UID: "802d2cd3-498b-4d87-880d-0f23a14c183f"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.260984 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "802d2cd3-498b-4d87-880d-0f23a14c183f" (UID: "802d2cd3-498b-4d87-880d-0f23a14c183f"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.337036 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b57afe9-4d09-42a9-a337-3847d8d836d4-v4-0-config-system-service-ca\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.337415 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0b57afe9-4d09-42a9-a337-3847d8d836d4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.337518 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7c6f9\" (UniqueName: \"kubernetes.io/projected/0b57afe9-4d09-42a9-a337-3847d8d836d4-kube-api-access-7c6f9\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.337612 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0b57afe9-4d09-42a9-a337-3847d8d836d4-v4-0-config-system-session\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.337694 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0b57afe9-4d09-42a9-a337-3847d8d836d4-v4-0-config-system-router-certs\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.337778 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0b57afe9-4d09-42a9-a337-3847d8d836d4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.337861 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0b57afe9-4d09-42a9-a337-3847d8d836d4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.337941 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0b57afe9-4d09-42a9-a337-3847d8d836d4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.338022 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0b57afe9-4d09-42a9-a337-3847d8d836d4-audit-policies\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.338097 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0b57afe9-4d09-42a9-a337-3847d8d836d4-audit-dir\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.338171 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0b57afe9-4d09-42a9-a337-3847d8d836d4-v4-0-config-user-template-error\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.338243 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b57afe9-4d09-42a9-a337-3847d8d836d4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.338424 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0b57afe9-4d09-42a9-a337-3847d8d836d4-v4-0-config-user-template-login\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.338499 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0b57afe9-4d09-42a9-a337-3847d8d836d4-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.338590 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.338653 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.338727 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.338799 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.338864 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.338949 4760 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/802d2cd3-498b-4d87-880d-0f23a14c183f-audit-dir\") on node \"crc\" DevicePath \"\""
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.339010 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.339060 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0b57afe9-4d09-42a9-a337-3847d8d836d4-audit-policies\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.339071 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntxhv\" (UniqueName: \"kubernetes.io/projected/802d2cd3-498b-4d87-880d-0f23a14c183f-kube-api-access-ntxhv\") on node \"crc\" DevicePath \"\""
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.339183 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.339248 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.339306 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.339390 4760 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/802d2cd3-498b-4d87-880d-0f23a14c183f-audit-policies\") on node \"crc\" DevicePath \"\""
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.339455 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.339520 4760 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/802d2cd3-498b-4d87-880d-0f23a14c183f-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.337884 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b57afe9-4d09-42a9-a337-3847d8d836d4-v4-0-config-system-service-ca\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.339681 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0b57afe9-4d09-42a9-a337-3847d8d836d4-audit-dir\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.339782 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b57afe9-4d09-42a9-a337-3847d8d836d4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.340440 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0b57afe9-4d09-42a9-a337-3847d8d836d4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.342370 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0b57afe9-4d09-42a9-a337-3847d8d836d4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.342634 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0b57afe9-4d09-42a9-a337-3847d8d836d4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.342694 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0b57afe9-4d09-42a9-a337-3847d8d836d4-v4-0-config-system-router-certs\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.343472 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0b57afe9-4d09-42a9-a337-3847d8d836d4-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.343855 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0b57afe9-4d09-42a9-a337-3847d8d836d4-v4-0-config-user-template-login\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.344524 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0b57afe9-4d09-42a9-a337-3847d8d836d4-v4-0-config-system-session\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.345786 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0b57afe9-4d09-42a9-a337-3847d8d836d4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.346019 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0b57afe9-4d09-42a9-a337-3847d8d836d4-v4-0-config-user-template-error\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.357547 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7c6f9\" (UniqueName: \"kubernetes.io/projected/0b57afe9-4d09-42a9-a337-3847d8d836d4-kube-api-access-7c6f9\") pod \"oauth-openshift-5cdc7c97b9-z4vdl\" (UID: \"0b57afe9-4d09-42a9-a337-3847d8d836d4\") " pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.403198 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-p467d"]
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.406841 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-p467d"]
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.433667 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.631384 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="802d2cd3-498b-4d87-880d-0f23a14c183f" path="/var/lib/kubelet/pods/802d2cd3-498b-4d87-880d-0f23a14c183f/volumes"
Jan 21 15:52:55 crc kubenswrapper[4760]: I0121 15:52:55.865421 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"]
Jan 21 15:52:56 crc kubenswrapper[4760]: I0121 15:52:56.081611 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl" event={"ID":"0b57afe9-4d09-42a9-a337-3847d8d836d4","Type":"ContainerStarted","Data":"8ad43cc00969d72c1b23b84fdda99fd9eab52fa3a68a52ac26ebed41dc6d8599"}
Jan 21 15:52:57 crc kubenswrapper[4760]: I0121 15:52:57.087997 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl" event={"ID":"0b57afe9-4d09-42a9-a337-3847d8d836d4","Type":"ContainerStarted","Data":"4ff3ebe3c6970191fd238d4883e00371865e1a35d53cf20b48657639acb5b9ca"}
Jan 21 15:52:57 crc kubenswrapper[4760]: I0121 15:52:57.088791 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:57 crc kubenswrapper[4760]: I0121 15:52:57.094366 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl"
Jan 21 15:52:57 crc kubenswrapper[4760]: I0121 15:52:57.111023 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-5cdc7c97b9-z4vdl" podStartSLOduration=28.110989812 podStartE2EDuration="28.110989812s" podCreationTimestamp="2026-01-21 15:52:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:52:57.109131176 +0000 UTC m=+347.776900794" watchObservedRunningTime="2026-01-21 15:52:57.110989812 +0000 UTC m=+347.778759410"
Jan 21 15:53:02 crc kubenswrapper[4760]: I0121 15:53:02.763473 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-r9xz4"]
Jan 21 15:53:02 crc kubenswrapper[4760]: I0121 15:53:02.764885 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r9xz4"
Jan 21 15:53:02 crc kubenswrapper[4760]: I0121 15:53:02.767747 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 21 15:53:02 crc kubenswrapper[4760]: I0121 15:53:02.779490 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r9xz4"]
Jan 21 15:53:02 crc kubenswrapper[4760]: I0121 15:53:02.938616 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b7a88f7-910c-443d-8dbc-471879998d6a-catalog-content\") pod \"certified-operators-r9xz4\" (UID: \"3b7a88f7-910c-443d-8dbc-471879998d6a\") " pod="openshift-marketplace/certified-operators-r9xz4"
Jan 21 15:53:02 crc kubenswrapper[4760]: I0121 15:53:02.938681 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b7a88f7-910c-443d-8dbc-471879998d6a-utilities\") pod \"certified-operators-r9xz4\" (UID: \"3b7a88f7-910c-443d-8dbc-471879998d6a\") " pod="openshift-marketplace/certified-operators-r9xz4"
Jan 21 15:53:02 crc kubenswrapper[4760]: I0121 15:53:02.938701 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55wqf\" (UniqueName: \"kubernetes.io/projected/3b7a88f7-910c-443d-8dbc-471879998d6a-kube-api-access-55wqf\") pod \"certified-operators-r9xz4\" (UID: \"3b7a88f7-910c-443d-8dbc-471879998d6a\") " pod="openshift-marketplace/certified-operators-r9xz4"
Jan 21 15:53:03 crc kubenswrapper[4760]: I0121 15:53:03.039290 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b7a88f7-910c-443d-8dbc-471879998d6a-catalog-content\") pod \"certified-operators-r9xz4\" (UID: \"3b7a88f7-910c-443d-8dbc-471879998d6a\") " pod="openshift-marketplace/certified-operators-r9xz4"
Jan 21 15:53:03 crc kubenswrapper[4760]: I0121 15:53:03.039368 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b7a88f7-910c-443d-8dbc-471879998d6a-utilities\") pod \"certified-operators-r9xz4\" (UID: \"3b7a88f7-910c-443d-8dbc-471879998d6a\") " pod="openshift-marketplace/certified-operators-r9xz4"
Jan 21 15:53:03 crc kubenswrapper[4760]: I0121 15:53:03.039389 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55wqf\" (UniqueName: \"kubernetes.io/projected/3b7a88f7-910c-443d-8dbc-471879998d6a-kube-api-access-55wqf\") pod \"certified-operators-r9xz4\" (UID: \"3b7a88f7-910c-443d-8dbc-471879998d6a\") " pod="openshift-marketplace/certified-operators-r9xz4"
Jan 21 15:53:03 crc kubenswrapper[4760]: I0121 15:53:03.039907 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b7a88f7-910c-443d-8dbc-471879998d6a-catalog-content\") pod \"certified-operators-r9xz4\" (UID: \"3b7a88f7-910c-443d-8dbc-471879998d6a\") " pod="openshift-marketplace/certified-operators-r9xz4"
Jan 21 15:53:03 crc kubenswrapper[4760]: I0121 15:53:03.039991 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b7a88f7-910c-443d-8dbc-471879998d6a-utilities\") pod \"certified-operators-r9xz4\" (UID: \"3b7a88f7-910c-443d-8dbc-471879998d6a\") " pod="openshift-marketplace/certified-operators-r9xz4"
Jan 21 15:53:03 crc kubenswrapper[4760]: I0121 15:53:03.062217 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55wqf\" (UniqueName: \"kubernetes.io/projected/3b7a88f7-910c-443d-8dbc-471879998d6a-kube-api-access-55wqf\") pod \"certified-operators-r9xz4\" (UID: \"3b7a88f7-910c-443d-8dbc-471879998d6a\") " pod="openshift-marketplace/certified-operators-r9xz4"
Jan 21 15:53:03 crc kubenswrapper[4760]: I0121 15:53:03.081016 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r9xz4"
Jan 21 15:53:03 crc kubenswrapper[4760]: I0121 15:53:03.523831 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r9xz4"]
Jan 21 15:53:03 crc kubenswrapper[4760]: I0121 15:53:03.759539 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-v4mmm"]
Jan 21 15:53:03 crc kubenswrapper[4760]: I0121 15:53:03.760831 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v4mmm"
Jan 21 15:53:03 crc kubenswrapper[4760]: I0121 15:53:03.763503 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 21 15:53:03 crc kubenswrapper[4760]: I0121 15:53:03.777372 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v4mmm"]
Jan 21 15:53:03 crc kubenswrapper[4760]: I0121 15:53:03.849951 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtlqv\" (UniqueName: \"kubernetes.io/projected/a74de4f6-26aa-473e-87a5-b4a2a30f0596-kube-api-access-mtlqv\") pod \"redhat-operators-v4mmm\" (UID: \"a74de4f6-26aa-473e-87a5-b4a2a30f0596\") " pod="openshift-marketplace/redhat-operators-v4mmm"
Jan 21 15:53:03 crc kubenswrapper[4760]: I0121 15:53:03.850018 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a74de4f6-26aa-473e-87a5-b4a2a30f0596-catalog-content\") pod \"redhat-operators-v4mmm\" (UID: \"a74de4f6-26aa-473e-87a5-b4a2a30f0596\") " pod="openshift-marketplace/redhat-operators-v4mmm"
Jan 21 15:53:03 crc kubenswrapper[4760]: I0121 15:53:03.850058 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a74de4f6-26aa-473e-87a5-b4a2a30f0596-utilities\") pod \"redhat-operators-v4mmm\" (UID: \"a74de4f6-26aa-473e-87a5-b4a2a30f0596\") " pod="openshift-marketplace/redhat-operators-v4mmm"
Jan 21 15:53:03 crc kubenswrapper[4760]: I0121 15:53:03.952249 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a74de4f6-26aa-473e-87a5-b4a2a30f0596-catalog-content\") pod \"redhat-operators-v4mmm\" (UID: \"a74de4f6-26aa-473e-87a5-b4a2a30f0596\") " pod="openshift-marketplace/redhat-operators-v4mmm"
Jan 21 15:53:03 crc kubenswrapper[4760]: I0121 15:53:03.952384 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a74de4f6-26aa-473e-87a5-b4a2a30f0596-utilities\") pod \"redhat-operators-v4mmm\" (UID: \"a74de4f6-26aa-473e-87a5-b4a2a30f0596\") " pod="openshift-marketplace/redhat-operators-v4mmm"
Jan 21 15:53:03 crc kubenswrapper[4760]: I0121 15:53:03.952491 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtlqv\" (UniqueName: \"kubernetes.io/projected/a74de4f6-26aa-473e-87a5-b4a2a30f0596-kube-api-access-mtlqv\") pod \"redhat-operators-v4mmm\" (UID: \"a74de4f6-26aa-473e-87a5-b4a2a30f0596\") " pod="openshift-marketplace/redhat-operators-v4mmm"
Jan 21 15:53:03 crc kubenswrapper[4760]: I0121 15:53:03.953090 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a74de4f6-26aa-473e-87a5-b4a2a30f0596-utilities\") pod \"redhat-operators-v4mmm\" (UID: \"a74de4f6-26aa-473e-87a5-b4a2a30f0596\") " pod="openshift-marketplace/redhat-operators-v4mmm"
Jan 21 15:53:03 crc kubenswrapper[4760]: I0121 15:53:03.953087 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a74de4f6-26aa-473e-87a5-b4a2a30f0596-catalog-content\") pod \"redhat-operators-v4mmm\" (UID: \"a74de4f6-26aa-473e-87a5-b4a2a30f0596\") " pod="openshift-marketplace/redhat-operators-v4mmm"
Jan 21 15:53:03 crc kubenswrapper[4760]: I0121 15:53:03.977343 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtlqv\" (UniqueName: \"kubernetes.io/projected/a74de4f6-26aa-473e-87a5-b4a2a30f0596-kube-api-access-mtlqv\") pod \"redhat-operators-v4mmm\" (UID: \"a74de4f6-26aa-473e-87a5-b4a2a30f0596\") " pod="openshift-marketplace/redhat-operators-v4mmm"
Jan 21 15:53:04 crc kubenswrapper[4760]: I0121 15:53:04.120470 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v4mmm"
Jan 21 15:53:04 crc kubenswrapper[4760]: I0121 15:53:04.129505 4760 generic.go:334] "Generic (PLEG): container finished" podID="3b7a88f7-910c-443d-8dbc-471879998d6a" containerID="865e7bda4b83c41744a1cc1b74aa8c9455af1241382c103c27c51c79890381dd" exitCode=0
Jan 21 15:53:04 crc kubenswrapper[4760]: I0121 15:53:04.129658 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r9xz4" event={"ID":"3b7a88f7-910c-443d-8dbc-471879998d6a","Type":"ContainerDied","Data":"865e7bda4b83c41744a1cc1b74aa8c9455af1241382c103c27c51c79890381dd"}
Jan 21 15:53:04 crc kubenswrapper[4760]: I0121 15:53:04.130591 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r9xz4" event={"ID":"3b7a88f7-910c-443d-8dbc-471879998d6a","Type":"ContainerStarted","Data":"a0a4d3ee8f7bd2e44bdd45b203b434f54188b13bc3c326d8e2fcf334eaa6f564"}
Jan 21 15:53:04 crc kubenswrapper[4760]: I0121 15:53:04.560134 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v4mmm"]
Jan 21 15:53:04 crc kubenswrapper[4760]: W0121 15:53:04.567641 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda74de4f6_26aa_473e_87a5_b4a2a30f0596.slice/crio-c2297301a888566220c9dd607d48ab76a93495b9356a6ea2be2b7f7c24584db0 WatchSource:0}: Error finding container c2297301a888566220c9dd607d48ab76a93495b9356a6ea2be2b7f7c24584db0: Status 404 returned error can't find the container with id c2297301a888566220c9dd607d48ab76a93495b9356a6ea2be2b7f7c24584db0
Jan 21 15:53:05 crc kubenswrapper[4760]: I0121 15:53:05.157836 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r9xz4" event={"ID":"3b7a88f7-910c-443d-8dbc-471879998d6a","Type":"ContainerStarted","Data":"bbf13dcb34016c3c250fcb7e65eb65568479d3f6c344be087c81c6e3b4ec4249"}
Jan 21 15:53:05 crc kubenswrapper[4760]: I0121 15:53:05.161087 4760 generic.go:334] "Generic (PLEG): container finished" podID="a74de4f6-26aa-473e-87a5-b4a2a30f0596" containerID="1ef272ad09aecc93bbbf796e4adad59977adad0a516b0eb450d25ba683466c86" exitCode=0
Jan 21 15:53:05 crc kubenswrapper[4760]: I0121 15:53:05.161123 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v4mmm" event={"ID":"a74de4f6-26aa-473e-87a5-b4a2a30f0596","Type":"ContainerDied","Data":"1ef272ad09aecc93bbbf796e4adad59977adad0a516b0eb450d25ba683466c86"}
Jan 21 15:53:05 crc kubenswrapper[4760]: I0121 15:53:05.161145 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v4mmm" event={"ID":"a74de4f6-26aa-473e-87a5-b4a2a30f0596","Type":"ContainerStarted","Data":"c2297301a888566220c9dd607d48ab76a93495b9356a6ea2be2b7f7c24584db0"}
Jan 21 15:53:05 crc kubenswrapper[4760]: I0121 15:53:05.162430 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-f6w64"]
Jan 21 15:53:05 crc kubenswrapper[4760]: I0121 15:53:05.163383 4760 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-f6w64" Jan 21 15:53:05 crc kubenswrapper[4760]: I0121 15:53:05.165978 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 21 15:53:05 crc kubenswrapper[4760]: I0121 15:53:05.170203 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f6w64"] Jan 21 15:53:05 crc kubenswrapper[4760]: I0121 15:53:05.274774 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eceda6b0-5176-4f10-83f7-2a652e48f206-utilities\") pod \"community-operators-f6w64\" (UID: \"eceda6b0-5176-4f10-83f7-2a652e48f206\") " pod="openshift-marketplace/community-operators-f6w64" Jan 21 15:53:05 crc kubenswrapper[4760]: I0121 15:53:05.274861 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkqv2\" (UniqueName: \"kubernetes.io/projected/eceda6b0-5176-4f10-83f7-2a652e48f206-kube-api-access-vkqv2\") pod \"community-operators-f6w64\" (UID: \"eceda6b0-5176-4f10-83f7-2a652e48f206\") " pod="openshift-marketplace/community-operators-f6w64" Jan 21 15:53:05 crc kubenswrapper[4760]: I0121 15:53:05.274922 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eceda6b0-5176-4f10-83f7-2a652e48f206-catalog-content\") pod \"community-operators-f6w64\" (UID: \"eceda6b0-5176-4f10-83f7-2a652e48f206\") " pod="openshift-marketplace/community-operators-f6w64" Jan 21 15:53:05 crc kubenswrapper[4760]: I0121 15:53:05.377287 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eceda6b0-5176-4f10-83f7-2a652e48f206-catalog-content\") pod \"community-operators-f6w64\" (UID: 
\"eceda6b0-5176-4f10-83f7-2a652e48f206\") " pod="openshift-marketplace/community-operators-f6w64" Jan 21 15:53:05 crc kubenswrapper[4760]: I0121 15:53:05.377413 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eceda6b0-5176-4f10-83f7-2a652e48f206-utilities\") pod \"community-operators-f6w64\" (UID: \"eceda6b0-5176-4f10-83f7-2a652e48f206\") " pod="openshift-marketplace/community-operators-f6w64" Jan 21 15:53:05 crc kubenswrapper[4760]: I0121 15:53:05.377467 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkqv2\" (UniqueName: \"kubernetes.io/projected/eceda6b0-5176-4f10-83f7-2a652e48f206-kube-api-access-vkqv2\") pod \"community-operators-f6w64\" (UID: \"eceda6b0-5176-4f10-83f7-2a652e48f206\") " pod="openshift-marketplace/community-operators-f6w64" Jan 21 15:53:05 crc kubenswrapper[4760]: I0121 15:53:05.377927 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eceda6b0-5176-4f10-83f7-2a652e48f206-catalog-content\") pod \"community-operators-f6w64\" (UID: \"eceda6b0-5176-4f10-83f7-2a652e48f206\") " pod="openshift-marketplace/community-operators-f6w64" Jan 21 15:53:05 crc kubenswrapper[4760]: I0121 15:53:05.377976 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eceda6b0-5176-4f10-83f7-2a652e48f206-utilities\") pod \"community-operators-f6w64\" (UID: \"eceda6b0-5176-4f10-83f7-2a652e48f206\") " pod="openshift-marketplace/community-operators-f6w64" Jan 21 15:53:05 crc kubenswrapper[4760]: I0121 15:53:05.402539 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkqv2\" (UniqueName: \"kubernetes.io/projected/eceda6b0-5176-4f10-83f7-2a652e48f206-kube-api-access-vkqv2\") pod \"community-operators-f6w64\" (UID: 
\"eceda6b0-5176-4f10-83f7-2a652e48f206\") " pod="openshift-marketplace/community-operators-f6w64" Jan 21 15:53:05 crc kubenswrapper[4760]: I0121 15:53:05.491403 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f6w64" Jan 21 15:53:05 crc kubenswrapper[4760]: I0121 15:53:05.907948 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f6w64"] Jan 21 15:53:05 crc kubenswrapper[4760]: W0121 15:53:05.917873 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeceda6b0_5176_4f10_83f7_2a652e48f206.slice/crio-3dc27ab746d9212304104b1faa97afe55798c39b6bb083bc8af9e4ab790143f8 WatchSource:0}: Error finding container 3dc27ab746d9212304104b1faa97afe55798c39b6bb083bc8af9e4ab790143f8: Status 404 returned error can't find the container with id 3dc27ab746d9212304104b1faa97afe55798c39b6bb083bc8af9e4ab790143f8 Jan 21 15:53:06 crc kubenswrapper[4760]: I0121 15:53:06.168092 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dbfv2"] Jan 21 15:53:06 crc kubenswrapper[4760]: I0121 15:53:06.171134 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dbfv2" Jan 21 15:53:06 crc kubenswrapper[4760]: I0121 15:53:06.174066 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 21 15:53:06 crc kubenswrapper[4760]: I0121 15:53:06.177599 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dbfv2"] Jan 21 15:53:06 crc kubenswrapper[4760]: I0121 15:53:06.191285 4760 generic.go:334] "Generic (PLEG): container finished" podID="eceda6b0-5176-4f10-83f7-2a652e48f206" containerID="61529627a77e59d1f8c818b86a16ebfc4bf633266c0b65793863c416b0f24368" exitCode=0 Jan 21 15:53:06 crc kubenswrapper[4760]: I0121 15:53:06.191416 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f6w64" event={"ID":"eceda6b0-5176-4f10-83f7-2a652e48f206","Type":"ContainerDied","Data":"61529627a77e59d1f8c818b86a16ebfc4bf633266c0b65793863c416b0f24368"} Jan 21 15:53:06 crc kubenswrapper[4760]: I0121 15:53:06.191988 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f6w64" event={"ID":"eceda6b0-5176-4f10-83f7-2a652e48f206","Type":"ContainerStarted","Data":"3dc27ab746d9212304104b1faa97afe55798c39b6bb083bc8af9e4ab790143f8"} Jan 21 15:53:06 crc kubenswrapper[4760]: I0121 15:53:06.210918 4760 generic.go:334] "Generic (PLEG): container finished" podID="3b7a88f7-910c-443d-8dbc-471879998d6a" containerID="bbf13dcb34016c3c250fcb7e65eb65568479d3f6c344be087c81c6e3b4ec4249" exitCode=0 Jan 21 15:53:06 crc kubenswrapper[4760]: I0121 15:53:06.210997 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r9xz4" event={"ID":"3b7a88f7-910c-443d-8dbc-471879998d6a","Type":"ContainerDied","Data":"bbf13dcb34016c3c250fcb7e65eb65568479d3f6c344be087c81c6e3b4ec4249"} Jan 21 15:53:06 crc kubenswrapper[4760]: I0121 15:53:06.292845 4760 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0782378-6389-4c4d-b387-3d2860fb524f-catalog-content\") pod \"redhat-marketplace-dbfv2\" (UID: \"f0782378-6389-4c4d-b387-3d2860fb524f\") " pod="openshift-marketplace/redhat-marketplace-dbfv2" Jan 21 15:53:06 crc kubenswrapper[4760]: I0121 15:53:06.293019 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p8z7\" (UniqueName: \"kubernetes.io/projected/f0782378-6389-4c4d-b387-3d2860fb524f-kube-api-access-2p8z7\") pod \"redhat-marketplace-dbfv2\" (UID: \"f0782378-6389-4c4d-b387-3d2860fb524f\") " pod="openshift-marketplace/redhat-marketplace-dbfv2" Jan 21 15:53:06 crc kubenswrapper[4760]: I0121 15:53:06.293042 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0782378-6389-4c4d-b387-3d2860fb524f-utilities\") pod \"redhat-marketplace-dbfv2\" (UID: \"f0782378-6389-4c4d-b387-3d2860fb524f\") " pod="openshift-marketplace/redhat-marketplace-dbfv2" Jan 21 15:53:06 crc kubenswrapper[4760]: I0121 15:53:06.394007 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0782378-6389-4c4d-b387-3d2860fb524f-catalog-content\") pod \"redhat-marketplace-dbfv2\" (UID: \"f0782378-6389-4c4d-b387-3d2860fb524f\") " pod="openshift-marketplace/redhat-marketplace-dbfv2" Jan 21 15:53:06 crc kubenswrapper[4760]: I0121 15:53:06.394403 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2p8z7\" (UniqueName: \"kubernetes.io/projected/f0782378-6389-4c4d-b387-3d2860fb524f-kube-api-access-2p8z7\") pod \"redhat-marketplace-dbfv2\" (UID: \"f0782378-6389-4c4d-b387-3d2860fb524f\") " pod="openshift-marketplace/redhat-marketplace-dbfv2" Jan 21 15:53:06 crc 
kubenswrapper[4760]: I0121 15:53:06.394552 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0782378-6389-4c4d-b387-3d2860fb524f-utilities\") pod \"redhat-marketplace-dbfv2\" (UID: \"f0782378-6389-4c4d-b387-3d2860fb524f\") " pod="openshift-marketplace/redhat-marketplace-dbfv2" Jan 21 15:53:06 crc kubenswrapper[4760]: I0121 15:53:06.394649 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0782378-6389-4c4d-b387-3d2860fb524f-catalog-content\") pod \"redhat-marketplace-dbfv2\" (UID: \"f0782378-6389-4c4d-b387-3d2860fb524f\") " pod="openshift-marketplace/redhat-marketplace-dbfv2" Jan 21 15:53:06 crc kubenswrapper[4760]: I0121 15:53:06.394965 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0782378-6389-4c4d-b387-3d2860fb524f-utilities\") pod \"redhat-marketplace-dbfv2\" (UID: \"f0782378-6389-4c4d-b387-3d2860fb524f\") " pod="openshift-marketplace/redhat-marketplace-dbfv2" Jan 21 15:53:06 crc kubenswrapper[4760]: I0121 15:53:06.412694 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2p8z7\" (UniqueName: \"kubernetes.io/projected/f0782378-6389-4c4d-b387-3d2860fb524f-kube-api-access-2p8z7\") pod \"redhat-marketplace-dbfv2\" (UID: \"f0782378-6389-4c4d-b387-3d2860fb524f\") " pod="openshift-marketplace/redhat-marketplace-dbfv2" Jan 21 15:53:06 crc kubenswrapper[4760]: I0121 15:53:06.489985 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dbfv2" Jan 21 15:53:06 crc kubenswrapper[4760]: I0121 15:53:06.907607 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dbfv2"] Jan 21 15:53:06 crc kubenswrapper[4760]: W0121 15:53:06.922382 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0782378_6389_4c4d_b387_3d2860fb524f.slice/crio-32fe76999b15c3486d11c8619852bc7936c132bfa8ccfdc58d4c8a220f163b13 WatchSource:0}: Error finding container 32fe76999b15c3486d11c8619852bc7936c132bfa8ccfdc58d4c8a220f163b13: Status 404 returned error can't find the container with id 32fe76999b15c3486d11c8619852bc7936c132bfa8ccfdc58d4c8a220f163b13 Jan 21 15:53:07 crc kubenswrapper[4760]: I0121 15:53:07.219376 4760 generic.go:334] "Generic (PLEG): container finished" podID="a74de4f6-26aa-473e-87a5-b4a2a30f0596" containerID="bf0ceb3fd1eb76b79a078434f82d5891fba23e31d38ae9ecf2bc02b7cd46b83f" exitCode=0 Jan 21 15:53:07 crc kubenswrapper[4760]: I0121 15:53:07.219492 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v4mmm" event={"ID":"a74de4f6-26aa-473e-87a5-b4a2a30f0596","Type":"ContainerDied","Data":"bf0ceb3fd1eb76b79a078434f82d5891fba23e31d38ae9ecf2bc02b7cd46b83f"} Jan 21 15:53:07 crc kubenswrapper[4760]: I0121 15:53:07.225259 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r9xz4" event={"ID":"3b7a88f7-910c-443d-8dbc-471879998d6a","Type":"ContainerStarted","Data":"ef446adc5a5ecfcfcd49eb5e9663d69540b024b909dcfa7b1e0158ebcb99adb2"} Jan 21 15:53:07 crc kubenswrapper[4760]: I0121 15:53:07.227731 4760 generic.go:334] "Generic (PLEG): container finished" podID="f0782378-6389-4c4d-b387-3d2860fb524f" containerID="f2edfa804871775c3e81b612d11009776f48cd62dad9e5eb503fec8d46fe9fa5" exitCode=0 Jan 21 15:53:07 crc kubenswrapper[4760]: I0121 
15:53:07.227816 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dbfv2" event={"ID":"f0782378-6389-4c4d-b387-3d2860fb524f","Type":"ContainerDied","Data":"f2edfa804871775c3e81b612d11009776f48cd62dad9e5eb503fec8d46fe9fa5"} Jan 21 15:53:07 crc kubenswrapper[4760]: I0121 15:53:07.227851 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dbfv2" event={"ID":"f0782378-6389-4c4d-b387-3d2860fb524f","Type":"ContainerStarted","Data":"32fe76999b15c3486d11c8619852bc7936c132bfa8ccfdc58d4c8a220f163b13"} Jan 21 15:53:07 crc kubenswrapper[4760]: I0121 15:53:07.281707 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-r9xz4" podStartSLOduration=2.50164676 podStartE2EDuration="5.281677972s" podCreationTimestamp="2026-01-21 15:53:02 +0000 UTC" firstStartedPulling="2026-01-21 15:53:04.136534113 +0000 UTC m=+354.804303701" lastFinishedPulling="2026-01-21 15:53:06.916565305 +0000 UTC m=+357.584334913" observedRunningTime="2026-01-21 15:53:07.279387875 +0000 UTC m=+357.947157463" watchObservedRunningTime="2026-01-21 15:53:07.281677972 +0000 UTC m=+357.949447570" Jan 21 15:53:08 crc kubenswrapper[4760]: I0121 15:53:08.235107 4760 generic.go:334] "Generic (PLEG): container finished" podID="f0782378-6389-4c4d-b387-3d2860fb524f" containerID="36323dda2db419bcfd6dc6366f3586d7d64de74a64e0ef9713061a4234c4cf31" exitCode=0 Jan 21 15:53:08 crc kubenswrapper[4760]: I0121 15:53:08.235185 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dbfv2" event={"ID":"f0782378-6389-4c4d-b387-3d2860fb524f","Type":"ContainerDied","Data":"36323dda2db419bcfd6dc6366f3586d7d64de74a64e0ef9713061a4234c4cf31"} Jan 21 15:53:08 crc kubenswrapper[4760]: I0121 15:53:08.237418 4760 generic.go:334] "Generic (PLEG): container finished" podID="eceda6b0-5176-4f10-83f7-2a652e48f206" 
containerID="c21f3fe2bfe25c084f8a583ed8d653a9bf15374ee67eab7b2dc547ee02391092" exitCode=0 Jan 21 15:53:08 crc kubenswrapper[4760]: I0121 15:53:08.237445 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f6w64" event={"ID":"eceda6b0-5176-4f10-83f7-2a652e48f206","Type":"ContainerDied","Data":"c21f3fe2bfe25c084f8a583ed8d653a9bf15374ee67eab7b2dc547ee02391092"} Jan 21 15:53:08 crc kubenswrapper[4760]: I0121 15:53:08.243931 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v4mmm" event={"ID":"a74de4f6-26aa-473e-87a5-b4a2a30f0596","Type":"ContainerStarted","Data":"4cfbdf5a19f39b65d87d03b648c81998df9b6d0f42a33baa5bbcaa900fa4a599"} Jan 21 15:53:08 crc kubenswrapper[4760]: I0121 15:53:08.274447 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-v4mmm" podStartSLOduration=2.60081243 podStartE2EDuration="5.274424565s" podCreationTimestamp="2026-01-21 15:53:03 +0000 UTC" firstStartedPulling="2026-01-21 15:53:05.16258129 +0000 UTC m=+355.830350868" lastFinishedPulling="2026-01-21 15:53:07.836193425 +0000 UTC m=+358.503963003" observedRunningTime="2026-01-21 15:53:08.273457071 +0000 UTC m=+358.941226659" watchObservedRunningTime="2026-01-21 15:53:08.274424565 +0000 UTC m=+358.942194143" Jan 21 15:53:09 crc kubenswrapper[4760]: I0121 15:53:09.256230 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dbfv2" event={"ID":"f0782378-6389-4c4d-b387-3d2860fb524f","Type":"ContainerStarted","Data":"94e6409e27d522798f062809f6f6e5c62fad7354d615c8d89e10a584a3455616"} Jan 21 15:53:09 crc kubenswrapper[4760]: I0121 15:53:09.265388 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f6w64" 
event={"ID":"eceda6b0-5176-4f10-83f7-2a652e48f206","Type":"ContainerStarted","Data":"d782b6ce245e2dc21d1c82a686dd02d1b1a56f794627aafe9e86b4b761ffe694"} Jan 21 15:53:09 crc kubenswrapper[4760]: I0121 15:53:09.280470 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dbfv2" podStartSLOduration=1.7381691799999999 podStartE2EDuration="3.280445891s" podCreationTimestamp="2026-01-21 15:53:06 +0000 UTC" firstStartedPulling="2026-01-21 15:53:07.229766622 +0000 UTC m=+357.897536200" lastFinishedPulling="2026-01-21 15:53:08.772043333 +0000 UTC m=+359.439812911" observedRunningTime="2026-01-21 15:53:09.277692622 +0000 UTC m=+359.945462210" watchObservedRunningTime="2026-01-21 15:53:09.280445891 +0000 UTC m=+359.948215469" Jan 21 15:53:09 crc kubenswrapper[4760]: I0121 15:53:09.303819 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-f6w64" podStartSLOduration=1.7367349399999998 podStartE2EDuration="4.303795836s" podCreationTimestamp="2026-01-21 15:53:05 +0000 UTC" firstStartedPulling="2026-01-21 15:53:06.197658603 +0000 UTC m=+356.865428181" lastFinishedPulling="2026-01-21 15:53:08.764719499 +0000 UTC m=+359.432489077" observedRunningTime="2026-01-21 15:53:09.302350269 +0000 UTC m=+359.970119867" watchObservedRunningTime="2026-01-21 15:53:09.303795836 +0000 UTC m=+359.971565414" Jan 21 15:53:13 crc kubenswrapper[4760]: I0121 15:53:13.081544 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-r9xz4" Jan 21 15:53:13 crc kubenswrapper[4760]: I0121 15:53:13.083968 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-r9xz4" Jan 21 15:53:13 crc kubenswrapper[4760]: I0121 15:53:13.125256 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-r9xz4" 
Jan 21 15:53:13 crc kubenswrapper[4760]: I0121 15:53:13.323350 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-r9xz4" Jan 21 15:53:14 crc kubenswrapper[4760]: I0121 15:53:14.121134 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-v4mmm" Jan 21 15:53:14 crc kubenswrapper[4760]: I0121 15:53:14.121865 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-v4mmm" Jan 21 15:53:14 crc kubenswrapper[4760]: I0121 15:53:14.168782 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-v4mmm" Jan 21 15:53:14 crc kubenswrapper[4760]: I0121 15:53:14.335859 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-v4mmm" Jan 21 15:53:15 crc kubenswrapper[4760]: I0121 15:53:15.492448 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-f6w64" Jan 21 15:53:15 crc kubenswrapper[4760]: I0121 15:53:15.492541 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-f6w64" Jan 21 15:53:15 crc kubenswrapper[4760]: I0121 15:53:15.533291 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-f6w64" Jan 21 15:53:16 crc kubenswrapper[4760]: I0121 15:53:16.349946 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-f6w64" Jan 21 15:53:16 crc kubenswrapper[4760]: I0121 15:53:16.490778 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dbfv2" Jan 21 15:53:16 crc kubenswrapper[4760]: I0121 15:53:16.491101 4760 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dbfv2" Jan 21 15:53:16 crc kubenswrapper[4760]: I0121 15:53:16.528299 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dbfv2" Jan 21 15:53:17 crc kubenswrapper[4760]: I0121 15:53:17.384255 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dbfv2" Jan 21 15:53:20 crc kubenswrapper[4760]: I0121 15:53:20.108029 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-9847t"] Jan 21 15:53:20 crc kubenswrapper[4760]: I0121 15:53:20.108837 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-9847t" Jan 21 15:53:20 crc kubenswrapper[4760]: I0121 15:53:20.126562 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-9847t"] Jan 21 15:53:20 crc kubenswrapper[4760]: I0121 15:53:20.279614 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7144d079-8a61-468a-8c01-020c2cb35304-registry-certificates\") pod \"image-registry-66df7c8f76-9847t\" (UID: \"7144d079-8a61-468a-8c01-020c2cb35304\") " pod="openshift-image-registry/image-registry-66df7c8f76-9847t" Jan 21 15:53:20 crc kubenswrapper[4760]: I0121 15:53:20.279675 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7144d079-8a61-468a-8c01-020c2cb35304-bound-sa-token\") pod \"image-registry-66df7c8f76-9847t\" (UID: \"7144d079-8a61-468a-8c01-020c2cb35304\") " pod="openshift-image-registry/image-registry-66df7c8f76-9847t" Jan 21 15:53:20 crc kubenswrapper[4760]: I0121 15:53:20.279706 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7144d079-8a61-468a-8c01-020c2cb35304-installation-pull-secrets\") pod \"image-registry-66df7c8f76-9847t\" (UID: \"7144d079-8a61-468a-8c01-020c2cb35304\") " pod="openshift-image-registry/image-registry-66df7c8f76-9847t" Jan 21 15:53:20 crc kubenswrapper[4760]: I0121 15:53:20.279735 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf5fp\" (UniqueName: \"kubernetes.io/projected/7144d079-8a61-468a-8c01-020c2cb35304-kube-api-access-bf5fp\") pod \"image-registry-66df7c8f76-9847t\" (UID: \"7144d079-8a61-468a-8c01-020c2cb35304\") " pod="openshift-image-registry/image-registry-66df7c8f76-9847t" Jan 21 15:53:20 crc kubenswrapper[4760]: I0121 15:53:20.279792 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-9847t\" (UID: \"7144d079-8a61-468a-8c01-020c2cb35304\") " pod="openshift-image-registry/image-registry-66df7c8f76-9847t" Jan 21 15:53:20 crc kubenswrapper[4760]: I0121 15:53:20.279854 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7144d079-8a61-468a-8c01-020c2cb35304-registry-tls\") pod \"image-registry-66df7c8f76-9847t\" (UID: \"7144d079-8a61-468a-8c01-020c2cb35304\") " pod="openshift-image-registry/image-registry-66df7c8f76-9847t" Jan 21 15:53:20 crc kubenswrapper[4760]: I0121 15:53:20.279879 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7144d079-8a61-468a-8c01-020c2cb35304-trusted-ca\") pod \"image-registry-66df7c8f76-9847t\" (UID: 
\"7144d079-8a61-468a-8c01-020c2cb35304\") " pod="openshift-image-registry/image-registry-66df7c8f76-9847t" Jan 21 15:53:20 crc kubenswrapper[4760]: I0121 15:53:20.279919 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7144d079-8a61-468a-8c01-020c2cb35304-ca-trust-extracted\") pod \"image-registry-66df7c8f76-9847t\" (UID: \"7144d079-8a61-468a-8c01-020c2cb35304\") " pod="openshift-image-registry/image-registry-66df7c8f76-9847t" Jan 21 15:53:20 crc kubenswrapper[4760]: I0121 15:53:20.304499 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-9847t\" (UID: \"7144d079-8a61-468a-8c01-020c2cb35304\") " pod="openshift-image-registry/image-registry-66df7c8f76-9847t" Jan 21 15:53:20 crc kubenswrapper[4760]: I0121 15:53:20.381202 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7144d079-8a61-468a-8c01-020c2cb35304-registry-certificates\") pod \"image-registry-66df7c8f76-9847t\" (UID: \"7144d079-8a61-468a-8c01-020c2cb35304\") " pod="openshift-image-registry/image-registry-66df7c8f76-9847t" Jan 21 15:53:20 crc kubenswrapper[4760]: I0121 15:53:20.381279 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7144d079-8a61-468a-8c01-020c2cb35304-bound-sa-token\") pod \"image-registry-66df7c8f76-9847t\" (UID: \"7144d079-8a61-468a-8c01-020c2cb35304\") " pod="openshift-image-registry/image-registry-66df7c8f76-9847t" Jan 21 15:53:20 crc kubenswrapper[4760]: I0121 15:53:20.381308 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" 
(UniqueName: \"kubernetes.io/secret/7144d079-8a61-468a-8c01-020c2cb35304-installation-pull-secrets\") pod \"image-registry-66df7c8f76-9847t\" (UID: \"7144d079-8a61-468a-8c01-020c2cb35304\") " pod="openshift-image-registry/image-registry-66df7c8f76-9847t"
Jan 21 15:53:20 crc kubenswrapper[4760]: I0121 15:53:20.381349 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bf5fp\" (UniqueName: \"kubernetes.io/projected/7144d079-8a61-468a-8c01-020c2cb35304-kube-api-access-bf5fp\") pod \"image-registry-66df7c8f76-9847t\" (UID: \"7144d079-8a61-468a-8c01-020c2cb35304\") " pod="openshift-image-registry/image-registry-66df7c8f76-9847t"
Jan 21 15:53:20 crc kubenswrapper[4760]: I0121 15:53:20.381418 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7144d079-8a61-468a-8c01-020c2cb35304-registry-tls\") pod \"image-registry-66df7c8f76-9847t\" (UID: \"7144d079-8a61-468a-8c01-020c2cb35304\") " pod="openshift-image-registry/image-registry-66df7c8f76-9847t"
Jan 21 15:53:20 crc kubenswrapper[4760]: I0121 15:53:20.381435 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7144d079-8a61-468a-8c01-020c2cb35304-trusted-ca\") pod \"image-registry-66df7c8f76-9847t\" (UID: \"7144d079-8a61-468a-8c01-020c2cb35304\") " pod="openshift-image-registry/image-registry-66df7c8f76-9847t"
Jan 21 15:53:20 crc kubenswrapper[4760]: I0121 15:53:20.381460 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7144d079-8a61-468a-8c01-020c2cb35304-ca-trust-extracted\") pod \"image-registry-66df7c8f76-9847t\" (UID: \"7144d079-8a61-468a-8c01-020c2cb35304\") " pod="openshift-image-registry/image-registry-66df7c8f76-9847t"
Jan 21 15:53:20 crc kubenswrapper[4760]: I0121 15:53:20.381958 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7144d079-8a61-468a-8c01-020c2cb35304-ca-trust-extracted\") pod \"image-registry-66df7c8f76-9847t\" (UID: \"7144d079-8a61-468a-8c01-020c2cb35304\") " pod="openshift-image-registry/image-registry-66df7c8f76-9847t"
Jan 21 15:53:20 crc kubenswrapper[4760]: I0121 15:53:20.382624 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7144d079-8a61-468a-8c01-020c2cb35304-registry-certificates\") pod \"image-registry-66df7c8f76-9847t\" (UID: \"7144d079-8a61-468a-8c01-020c2cb35304\") " pod="openshift-image-registry/image-registry-66df7c8f76-9847t"
Jan 21 15:53:20 crc kubenswrapper[4760]: I0121 15:53:20.383637 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7144d079-8a61-468a-8c01-020c2cb35304-trusted-ca\") pod \"image-registry-66df7c8f76-9847t\" (UID: \"7144d079-8a61-468a-8c01-020c2cb35304\") " pod="openshift-image-registry/image-registry-66df7c8f76-9847t"
Jan 21 15:53:20 crc kubenswrapper[4760]: I0121 15:53:20.390438 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7144d079-8a61-468a-8c01-020c2cb35304-installation-pull-secrets\") pod \"image-registry-66df7c8f76-9847t\" (UID: \"7144d079-8a61-468a-8c01-020c2cb35304\") " pod="openshift-image-registry/image-registry-66df7c8f76-9847t"
Jan 21 15:53:20 crc kubenswrapper[4760]: I0121 15:53:20.390501 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7144d079-8a61-468a-8c01-020c2cb35304-registry-tls\") pod \"image-registry-66df7c8f76-9847t\" (UID: \"7144d079-8a61-468a-8c01-020c2cb35304\") " pod="openshift-image-registry/image-registry-66df7c8f76-9847t"
Jan 21 15:53:20 crc kubenswrapper[4760]: I0121 15:53:20.400306 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7144d079-8a61-468a-8c01-020c2cb35304-bound-sa-token\") pod \"image-registry-66df7c8f76-9847t\" (UID: \"7144d079-8a61-468a-8c01-020c2cb35304\") " pod="openshift-image-registry/image-registry-66df7c8f76-9847t"
Jan 21 15:53:20 crc kubenswrapper[4760]: I0121 15:53:20.401016 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf5fp\" (UniqueName: \"kubernetes.io/projected/7144d079-8a61-468a-8c01-020c2cb35304-kube-api-access-bf5fp\") pod \"image-registry-66df7c8f76-9847t\" (UID: \"7144d079-8a61-468a-8c01-020c2cb35304\") " pod="openshift-image-registry/image-registry-66df7c8f76-9847t"
Jan 21 15:53:20 crc kubenswrapper[4760]: I0121 15:53:20.426056 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-9847t"
Jan 21 15:53:20 crc kubenswrapper[4760]: I0121 15:53:20.820132 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-9847t"]
Jan 21 15:53:20 crc kubenswrapper[4760]: I0121 15:53:20.946313 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 15:53:20 crc kubenswrapper[4760]: I0121 15:53:20.946478 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 15:53:21 crc kubenswrapper[4760]: I0121 15:53:21.328794 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-9847t" event={"ID":"7144d079-8a61-468a-8c01-020c2cb35304","Type":"ContainerStarted","Data":"4f5c659fb379220d2213db0890da8d9ccee6932cb6e9dac11b16033d8227b4df"}
Jan 21 15:53:24 crc kubenswrapper[4760]: I0121 15:53:24.349928 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-9847t" event={"ID":"7144d079-8a61-468a-8c01-020c2cb35304","Type":"ContainerStarted","Data":"ce44c9dfaf6545285d671fef47d641f094659850059949d80739977598273f48"}
Jan 21 15:53:24 crc kubenswrapper[4760]: I0121 15:53:24.350363 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-9847t"
Jan 21 15:53:24 crc kubenswrapper[4760]: I0121 15:53:24.373786 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-9847t" podStartSLOduration=4.373754104 podStartE2EDuration="4.373754104s" podCreationTimestamp="2026-01-21 15:53:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:53:24.367773575 +0000 UTC m=+375.035543173" watchObservedRunningTime="2026-01-21 15:53:24.373754104 +0000 UTC m=+375.041523692"
Jan 21 15:53:25 crc kubenswrapper[4760]: I0121 15:53:25.452263 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85776c4794-sb6cx"]
Jan 21 15:53:25 crc kubenswrapper[4760]: I0121 15:53:25.452814 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-85776c4794-sb6cx" podUID="540caecb-e017-41ad-b3ad-c8854e7e968d" containerName="route-controller-manager" containerID="cri-o://26c0c2ae14b91ab50936440a1422c51c59193f29195bcfa5d9c2065110254d73" gracePeriod=30
Jan 21 15:53:26 crc kubenswrapper[4760]: I0121 15:53:26.369267 4760 generic.go:334] "Generic (PLEG): container finished" podID="540caecb-e017-41ad-b3ad-c8854e7e968d" containerID="26c0c2ae14b91ab50936440a1422c51c59193f29195bcfa5d9c2065110254d73" exitCode=0
Jan 21 15:53:26 crc kubenswrapper[4760]: I0121 15:53:26.369485 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85776c4794-sb6cx" event={"ID":"540caecb-e017-41ad-b3ad-c8854e7e968d","Type":"ContainerDied","Data":"26c0c2ae14b91ab50936440a1422c51c59193f29195bcfa5d9c2065110254d73"}
Jan 21 15:53:26 crc kubenswrapper[4760]: I0121 15:53:26.447346 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85776c4794-sb6cx"
Jan 21 15:53:26 crc kubenswrapper[4760]: I0121 15:53:26.596575 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/540caecb-e017-41ad-b3ad-c8854e7e968d-client-ca\") pod \"540caecb-e017-41ad-b3ad-c8854e7e968d\" (UID: \"540caecb-e017-41ad-b3ad-c8854e7e968d\") "
Jan 21 15:53:26 crc kubenswrapper[4760]: I0121 15:53:26.596766 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfrp8\" (UniqueName: \"kubernetes.io/projected/540caecb-e017-41ad-b3ad-c8854e7e968d-kube-api-access-lfrp8\") pod \"540caecb-e017-41ad-b3ad-c8854e7e968d\" (UID: \"540caecb-e017-41ad-b3ad-c8854e7e968d\") "
Jan 21 15:53:26 crc kubenswrapper[4760]: I0121 15:53:26.596802 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/540caecb-e017-41ad-b3ad-c8854e7e968d-serving-cert\") pod \"540caecb-e017-41ad-b3ad-c8854e7e968d\" (UID: \"540caecb-e017-41ad-b3ad-c8854e7e968d\") "
Jan 21 15:53:26 crc kubenswrapper[4760]: I0121 15:53:26.596874 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/540caecb-e017-41ad-b3ad-c8854e7e968d-config\") pod \"540caecb-e017-41ad-b3ad-c8854e7e968d\" (UID: \"540caecb-e017-41ad-b3ad-c8854e7e968d\") "
Jan 21 15:53:26 crc kubenswrapper[4760]: I0121 15:53:26.598050 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/540caecb-e017-41ad-b3ad-c8854e7e968d-config" (OuterVolumeSpecName: "config") pod "540caecb-e017-41ad-b3ad-c8854e7e968d" (UID: "540caecb-e017-41ad-b3ad-c8854e7e968d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:53:26 crc kubenswrapper[4760]: I0121 15:53:26.598087 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/540caecb-e017-41ad-b3ad-c8854e7e968d-client-ca" (OuterVolumeSpecName: "client-ca") pod "540caecb-e017-41ad-b3ad-c8854e7e968d" (UID: "540caecb-e017-41ad-b3ad-c8854e7e968d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:53:26 crc kubenswrapper[4760]: I0121 15:53:26.609562 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/540caecb-e017-41ad-b3ad-c8854e7e968d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "540caecb-e017-41ad-b3ad-c8854e7e968d" (UID: "540caecb-e017-41ad-b3ad-c8854e7e968d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:53:26 crc kubenswrapper[4760]: I0121 15:53:26.611298 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/540caecb-e017-41ad-b3ad-c8854e7e968d-kube-api-access-lfrp8" (OuterVolumeSpecName: "kube-api-access-lfrp8") pod "540caecb-e017-41ad-b3ad-c8854e7e968d" (UID: "540caecb-e017-41ad-b3ad-c8854e7e968d"). InnerVolumeSpecName "kube-api-access-lfrp8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:53:26 crc kubenswrapper[4760]: I0121 15:53:26.698910 4760 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/540caecb-e017-41ad-b3ad-c8854e7e968d-client-ca\") on node \"crc\" DevicePath \"\""
Jan 21 15:53:26 crc kubenswrapper[4760]: I0121 15:53:26.698963 4760 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/540caecb-e017-41ad-b3ad-c8854e7e968d-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 21 15:53:26 crc kubenswrapper[4760]: I0121 15:53:26.698973 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfrp8\" (UniqueName: \"kubernetes.io/projected/540caecb-e017-41ad-b3ad-c8854e7e968d-kube-api-access-lfrp8\") on node \"crc\" DevicePath \"\""
Jan 21 15:53:26 crc kubenswrapper[4760]: I0121 15:53:26.698985 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/540caecb-e017-41ad-b3ad-c8854e7e968d-config\") on node \"crc\" DevicePath \"\""
Jan 21 15:53:27 crc kubenswrapper[4760]: I0121 15:53:27.377437 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85776c4794-sb6cx" event={"ID":"540caecb-e017-41ad-b3ad-c8854e7e968d","Type":"ContainerDied","Data":"6255ad333c5188efaf495de6995ebd0928c5b2105cfaadc131b1a2c8c929c1aa"}
Jan 21 15:53:27 crc kubenswrapper[4760]: I0121 15:53:27.377554 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85776c4794-sb6cx"
Jan 21 15:53:27 crc kubenswrapper[4760]: I0121 15:53:27.377758 4760 scope.go:117] "RemoveContainer" containerID="26c0c2ae14b91ab50936440a1422c51c59193f29195bcfa5d9c2065110254d73"
Jan 21 15:53:27 crc kubenswrapper[4760]: I0121 15:53:27.380644 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6657c4c57b-hr4lw"]
Jan 21 15:53:27 crc kubenswrapper[4760]: E0121 15:53:27.380937 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="540caecb-e017-41ad-b3ad-c8854e7e968d" containerName="route-controller-manager"
Jan 21 15:53:27 crc kubenswrapper[4760]: I0121 15:53:27.380954 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="540caecb-e017-41ad-b3ad-c8854e7e968d" containerName="route-controller-manager"
Jan 21 15:53:27 crc kubenswrapper[4760]: I0121 15:53:27.381055 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="540caecb-e017-41ad-b3ad-c8854e7e968d" containerName="route-controller-manager"
Jan 21 15:53:27 crc kubenswrapper[4760]: I0121 15:53:27.381544 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6657c4c57b-hr4lw"
Jan 21 15:53:27 crc kubenswrapper[4760]: I0121 15:53:27.384615 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 21 15:53:27 crc kubenswrapper[4760]: I0121 15:53:27.384881 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 21 15:53:27 crc kubenswrapper[4760]: I0121 15:53:27.385064 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 21 15:53:27 crc kubenswrapper[4760]: I0121 15:53:27.385226 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 21 15:53:27 crc kubenswrapper[4760]: I0121 15:53:27.385450 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 21 15:53:27 crc kubenswrapper[4760]: I0121 15:53:27.386493 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 21 15:53:27 crc kubenswrapper[4760]: I0121 15:53:27.395133 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6657c4c57b-hr4lw"]
Jan 21 15:53:27 crc kubenswrapper[4760]: I0121 15:53:27.439977 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85776c4794-sb6cx"]
Jan 21 15:53:27 crc kubenswrapper[4760]: I0121 15:53:27.443663 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85776c4794-sb6cx"]
Jan 21 15:53:27 crc kubenswrapper[4760]: I0121 15:53:27.511618 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw757\" (UniqueName: \"kubernetes.io/projected/6c01ede9-5028-4575-8926-500779b722a7-kube-api-access-tw757\") pod \"route-controller-manager-6657c4c57b-hr4lw\" (UID: \"6c01ede9-5028-4575-8926-500779b722a7\") " pod="openshift-route-controller-manager/route-controller-manager-6657c4c57b-hr4lw"
Jan 21 15:53:27 crc kubenswrapper[4760]: I0121 15:53:27.511693 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c01ede9-5028-4575-8926-500779b722a7-config\") pod \"route-controller-manager-6657c4c57b-hr4lw\" (UID: \"6c01ede9-5028-4575-8926-500779b722a7\") " pod="openshift-route-controller-manager/route-controller-manager-6657c4c57b-hr4lw"
Jan 21 15:53:27 crc kubenswrapper[4760]: I0121 15:53:27.511744 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c01ede9-5028-4575-8926-500779b722a7-serving-cert\") pod \"route-controller-manager-6657c4c57b-hr4lw\" (UID: \"6c01ede9-5028-4575-8926-500779b722a7\") " pod="openshift-route-controller-manager/route-controller-manager-6657c4c57b-hr4lw"
Jan 21 15:53:27 crc kubenswrapper[4760]: I0121 15:53:27.511776 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6c01ede9-5028-4575-8926-500779b722a7-client-ca\") pod \"route-controller-manager-6657c4c57b-hr4lw\" (UID: \"6c01ede9-5028-4575-8926-500779b722a7\") " pod="openshift-route-controller-manager/route-controller-manager-6657c4c57b-hr4lw"
Jan 21 15:53:27 crc kubenswrapper[4760]: I0121 15:53:27.613276 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tw757\" (UniqueName: \"kubernetes.io/projected/6c01ede9-5028-4575-8926-500779b722a7-kube-api-access-tw757\") pod \"route-controller-manager-6657c4c57b-hr4lw\" (UID: \"6c01ede9-5028-4575-8926-500779b722a7\") " pod="openshift-route-controller-manager/route-controller-manager-6657c4c57b-hr4lw"
Jan 21 15:53:27 crc kubenswrapper[4760]: I0121 15:53:27.613361 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c01ede9-5028-4575-8926-500779b722a7-config\") pod \"route-controller-manager-6657c4c57b-hr4lw\" (UID: \"6c01ede9-5028-4575-8926-500779b722a7\") " pod="openshift-route-controller-manager/route-controller-manager-6657c4c57b-hr4lw"
Jan 21 15:53:27 crc kubenswrapper[4760]: I0121 15:53:27.613395 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c01ede9-5028-4575-8926-500779b722a7-serving-cert\") pod \"route-controller-manager-6657c4c57b-hr4lw\" (UID: \"6c01ede9-5028-4575-8926-500779b722a7\") " pod="openshift-route-controller-manager/route-controller-manager-6657c4c57b-hr4lw"
Jan 21 15:53:27 crc kubenswrapper[4760]: I0121 15:53:27.613423 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6c01ede9-5028-4575-8926-500779b722a7-client-ca\") pod \"route-controller-manager-6657c4c57b-hr4lw\" (UID: \"6c01ede9-5028-4575-8926-500779b722a7\") " pod="openshift-route-controller-manager/route-controller-manager-6657c4c57b-hr4lw"
Jan 21 15:53:27 crc kubenswrapper[4760]: I0121 15:53:27.614544 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6c01ede9-5028-4575-8926-500779b722a7-client-ca\") pod \"route-controller-manager-6657c4c57b-hr4lw\" (UID: \"6c01ede9-5028-4575-8926-500779b722a7\") " pod="openshift-route-controller-manager/route-controller-manager-6657c4c57b-hr4lw"
Jan 21 15:53:27 crc kubenswrapper[4760]: I0121 15:53:27.614713 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c01ede9-5028-4575-8926-500779b722a7-config\") pod \"route-controller-manager-6657c4c57b-hr4lw\" (UID: \"6c01ede9-5028-4575-8926-500779b722a7\") " pod="openshift-route-controller-manager/route-controller-manager-6657c4c57b-hr4lw"
Jan 21 15:53:27 crc kubenswrapper[4760]: I0121 15:53:27.620411 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c01ede9-5028-4575-8926-500779b722a7-serving-cert\") pod \"route-controller-manager-6657c4c57b-hr4lw\" (UID: \"6c01ede9-5028-4575-8926-500779b722a7\") " pod="openshift-route-controller-manager/route-controller-manager-6657c4c57b-hr4lw"
Jan 21 15:53:27 crc kubenswrapper[4760]: I0121 15:53:27.632639 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tw757\" (UniqueName: \"kubernetes.io/projected/6c01ede9-5028-4575-8926-500779b722a7-kube-api-access-tw757\") pod \"route-controller-manager-6657c4c57b-hr4lw\" (UID: \"6c01ede9-5028-4575-8926-500779b722a7\") " pod="openshift-route-controller-manager/route-controller-manager-6657c4c57b-hr4lw"
Jan 21 15:53:27 crc kubenswrapper[4760]: I0121 15:53:27.633539 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="540caecb-e017-41ad-b3ad-c8854e7e968d" path="/var/lib/kubelet/pods/540caecb-e017-41ad-b3ad-c8854e7e968d/volumes"
Jan 21 15:53:27 crc kubenswrapper[4760]: I0121 15:53:27.712025 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6657c4c57b-hr4lw"
Jan 21 15:53:28 crc kubenswrapper[4760]: I0121 15:53:28.133032 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6657c4c57b-hr4lw"]
Jan 21 15:53:28 crc kubenswrapper[4760]: I0121 15:53:28.385940 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6657c4c57b-hr4lw" event={"ID":"6c01ede9-5028-4575-8926-500779b722a7","Type":"ContainerStarted","Data":"7a5b287e55f380334dd1f158823d776d879d8bf47f68940810ab5fc369a34469"}
Jan 21 15:53:28 crc kubenswrapper[4760]: I0121 15:53:28.386617 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6657c4c57b-hr4lw"
Jan 21 15:53:28 crc kubenswrapper[4760]: I0121 15:53:28.386636 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6657c4c57b-hr4lw" event={"ID":"6c01ede9-5028-4575-8926-500779b722a7","Type":"ContainerStarted","Data":"a4c92286b78e6ba071aebee8c3557cc1cf092bc21c6a0cab067c6b460b4bb4a7"}
Jan 21 15:53:28 crc kubenswrapper[4760]: I0121 15:53:28.734242 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6657c4c57b-hr4lw"
Jan 21 15:53:28 crc kubenswrapper[4760]: I0121 15:53:28.764605 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6657c4c57b-hr4lw" podStartSLOduration=3.764581223 podStartE2EDuration="3.764581223s" podCreationTimestamp="2026-01-21 15:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:53:28.407797905 +0000 UTC m=+379.075567503" watchObservedRunningTime="2026-01-21 15:53:28.764581223 +0000 UTC m=+379.432350801"
Jan 21 15:53:40 crc kubenswrapper[4760]: I0121 15:53:40.432881 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-9847t"
Jan 21 15:53:40 crc kubenswrapper[4760]: I0121 15:53:40.490713 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-l6q9j"]
Jan 21 15:53:50 crc kubenswrapper[4760]: I0121 15:53:50.946650 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 15:53:50 crc kubenswrapper[4760]: I0121 15:53:50.947300 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 15:54:05 crc kubenswrapper[4760]: I0121 15:54:05.537265 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" podUID="4d974904-dd7e-42df-8d49-3c5633b30767" containerName="registry" containerID="cri-o://8f48d53014c54a6b5dbadff00a82064b7521b5a86e24ffcac9c366f3ec028859" gracePeriod=30
Jan 21 15:54:05 crc kubenswrapper[4760]: I0121 15:54:05.945917 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j"
Jan 21 15:54:05 crc kubenswrapper[4760]: I0121 15:54:05.964069 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4d974904-dd7e-42df-8d49-3c5633b30767-registry-tls\") pod \"4d974904-dd7e-42df-8d49-3c5633b30767\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") "
Jan 21 15:54:05 crc kubenswrapper[4760]: I0121 15:54:05.964119 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4d974904-dd7e-42df-8d49-3c5633b30767-registry-certificates\") pod \"4d974904-dd7e-42df-8d49-3c5633b30767\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") "
Jan 21 15:54:05 crc kubenswrapper[4760]: I0121 15:54:05.964196 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4d974904-dd7e-42df-8d49-3c5633b30767-trusted-ca\") pod \"4d974904-dd7e-42df-8d49-3c5633b30767\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") "
Jan 21 15:54:05 crc kubenswrapper[4760]: I0121 15:54:05.964237 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4d974904-dd7e-42df-8d49-3c5633b30767-ca-trust-extracted\") pod \"4d974904-dd7e-42df-8d49-3c5633b30767\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") "
Jan 21 15:54:05 crc kubenswrapper[4760]: I0121 15:54:05.964260 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4d974904-dd7e-42df-8d49-3c5633b30767-bound-sa-token\") pod \"4d974904-dd7e-42df-8d49-3c5633b30767\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") "
Jan 21 15:54:05 crc kubenswrapper[4760]: I0121 15:54:05.964586 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"4d974904-dd7e-42df-8d49-3c5633b30767\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") "
Jan 21 15:54:05 crc kubenswrapper[4760]: I0121 15:54:05.964649 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4d974904-dd7e-42df-8d49-3c5633b30767-installation-pull-secrets\") pod \"4d974904-dd7e-42df-8d49-3c5633b30767\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") "
Jan 21 15:54:05 crc kubenswrapper[4760]: I0121 15:54:05.964710 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6wnn\" (UniqueName: \"kubernetes.io/projected/4d974904-dd7e-42df-8d49-3c5633b30767-kube-api-access-d6wnn\") pod \"4d974904-dd7e-42df-8d49-3c5633b30767\" (UID: \"4d974904-dd7e-42df-8d49-3c5633b30767\") "
Jan 21 15:54:05 crc kubenswrapper[4760]: I0121 15:54:05.966040 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d974904-dd7e-42df-8d49-3c5633b30767-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "4d974904-dd7e-42df-8d49-3c5633b30767" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:54:05 crc kubenswrapper[4760]: I0121 15:54:05.973520 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d974904-dd7e-42df-8d49-3c5633b30767-kube-api-access-d6wnn" (OuterVolumeSpecName: "kube-api-access-d6wnn") pod "4d974904-dd7e-42df-8d49-3c5633b30767" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767"). InnerVolumeSpecName "kube-api-access-d6wnn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:54:05 crc kubenswrapper[4760]: I0121 15:54:05.974356 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d974904-dd7e-42df-8d49-3c5633b30767-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "4d974904-dd7e-42df-8d49-3c5633b30767" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:54:05 crc kubenswrapper[4760]: I0121 15:54:05.975312 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d974904-dd7e-42df-8d49-3c5633b30767-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "4d974904-dd7e-42df-8d49-3c5633b30767" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:54:05 crc kubenswrapper[4760]: I0121 15:54:05.982447 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d974904-dd7e-42df-8d49-3c5633b30767-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "4d974904-dd7e-42df-8d49-3c5633b30767" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:54:05 crc kubenswrapper[4760]: I0121 15:54:05.983753 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d974904-dd7e-42df-8d49-3c5633b30767-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "4d974904-dd7e-42df-8d49-3c5633b30767" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:54:05 crc kubenswrapper[4760]: I0121 15:54:05.985890 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "4d974904-dd7e-42df-8d49-3c5633b30767" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 21 15:54:05 crc kubenswrapper[4760]: I0121 15:54:05.990606 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d974904-dd7e-42df-8d49-3c5633b30767-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "4d974904-dd7e-42df-8d49-3c5633b30767" (UID: "4d974904-dd7e-42df-8d49-3c5633b30767"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 15:54:06 crc kubenswrapper[4760]: I0121 15:54:06.066020 4760 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4d974904-dd7e-42df-8d49-3c5633b30767-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 21 15:54:06 crc kubenswrapper[4760]: I0121 15:54:06.066047 4760 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4d974904-dd7e-42df-8d49-3c5633b30767-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Jan 21 15:54:06 crc kubenswrapper[4760]: I0121 15:54:06.066058 4760 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4d974904-dd7e-42df-8d49-3c5633b30767-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 21 15:54:06 crc kubenswrapper[4760]: I0121 15:54:06.066070 4760 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4d974904-dd7e-42df-8d49-3c5633b30767-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Jan 21 15:54:06 crc kubenswrapper[4760]: I0121 15:54:06.066079 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6wnn\" (UniqueName: \"kubernetes.io/projected/4d974904-dd7e-42df-8d49-3c5633b30767-kube-api-access-d6wnn\") on node \"crc\" DevicePath \"\""
Jan 21 15:54:06 crc kubenswrapper[4760]: I0121 15:54:06.066087 4760 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4d974904-dd7e-42df-8d49-3c5633b30767-registry-tls\") on node \"crc\" DevicePath \"\""
Jan 21 15:54:06 crc kubenswrapper[4760]: I0121 15:54:06.066095 4760 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4d974904-dd7e-42df-8d49-3c5633b30767-registry-certificates\") on node \"crc\" DevicePath \"\""
Jan 21 15:54:06 crc kubenswrapper[4760]: I0121 15:54:06.628554 4760 generic.go:334] "Generic (PLEG): container finished" podID="4d974904-dd7e-42df-8d49-3c5633b30767" containerID="8f48d53014c54a6b5dbadff00a82064b7521b5a86e24ffcac9c366f3ec028859" exitCode=0
Jan 21 15:54:06 crc kubenswrapper[4760]: I0121 15:54:06.628627 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" event={"ID":"4d974904-dd7e-42df-8d49-3c5633b30767","Type":"ContainerDied","Data":"8f48d53014c54a6b5dbadff00a82064b7521b5a86e24ffcac9c366f3ec028859"}
Jan 21 15:54:06 crc kubenswrapper[4760]: I0121 15:54:06.628640 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j"
Jan 21 15:54:06 crc kubenswrapper[4760]: I0121 15:54:06.628682 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-l6q9j" event={"ID":"4d974904-dd7e-42df-8d49-3c5633b30767","Type":"ContainerDied","Data":"5ac024438f82f52729980b350537456191be564162967e4cf350c96d6db2cc6d"}
Jan 21 15:54:06 crc kubenswrapper[4760]: I0121 15:54:06.628707 4760 scope.go:117] "RemoveContainer" containerID="8f48d53014c54a6b5dbadff00a82064b7521b5a86e24ffcac9c366f3ec028859"
Jan 21 15:54:06 crc kubenswrapper[4760]: I0121 15:54:06.655689 4760 scope.go:117] "RemoveContainer" containerID="8f48d53014c54a6b5dbadff00a82064b7521b5a86e24ffcac9c366f3ec028859"
Jan 21 15:54:06 crc kubenswrapper[4760]: E0121 15:54:06.656211 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f48d53014c54a6b5dbadff00a82064b7521b5a86e24ffcac9c366f3ec028859\": container with ID starting with 8f48d53014c54a6b5dbadff00a82064b7521b5a86e24ffcac9c366f3ec028859 not found: ID does not exist" containerID="8f48d53014c54a6b5dbadff00a82064b7521b5a86e24ffcac9c366f3ec028859"
Jan 21 15:54:06 crc kubenswrapper[4760]: I0121 15:54:06.656252 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f48d53014c54a6b5dbadff00a82064b7521b5a86e24ffcac9c366f3ec028859"} err="failed to get container status \"8f48d53014c54a6b5dbadff00a82064b7521b5a86e24ffcac9c366f3ec028859\": rpc error: code = NotFound desc = could not find container \"8f48d53014c54a6b5dbadff00a82064b7521b5a86e24ffcac9c366f3ec028859\": container with ID starting with 8f48d53014c54a6b5dbadff00a82064b7521b5a86e24ffcac9c366f3ec028859 not found: ID does not exist"
Jan 21 15:54:06 crc kubenswrapper[4760]: I0121 15:54:06.664006 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-l6q9j"]
Jan 21 15:54:06 crc kubenswrapper[4760]: I0121 15:54:06.666254 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-l6q9j"]
Jan 21 15:54:07 crc kubenswrapper[4760]: I0121 15:54:07.633941 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d974904-dd7e-42df-8d49-3c5633b30767" path="/var/lib/kubelet/pods/4d974904-dd7e-42df-8d49-3c5633b30767/volumes"
Jan 21 15:54:20 crc kubenswrapper[4760]: I0121 15:54:20.946378 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 15:54:20 crc kubenswrapper[4760]: I0121 15:54:20.947157 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 15:54:20 crc kubenswrapper[4760]: I0121 15:54:20.947211 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r"
Jan 21 15:54:20 crc kubenswrapper[4760]: I0121 15:54:20.947913 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e0b921ab21c8bb32b2a18330e6a05add434649cba02aa921e03391d26694c2f1"} pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 15:54:20 crc kubenswrapper[4760]: I0121 15:54:20.947972 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" containerID="cri-o://e0b921ab21c8bb32b2a18330e6a05add434649cba02aa921e03391d26694c2f1" gracePeriod=600
Jan 21 15:54:21 crc kubenswrapper[4760]: I0121 15:54:21.726600 4760 generic.go:334] "Generic (PLEG): container finished" podID="5dd365e7-570c-4130-a299-30e376624ce2" containerID="e0b921ab21c8bb32b2a18330e6a05add434649cba02aa921e03391d26694c2f1" exitCode=0
Jan 21 15:54:21 crc kubenswrapper[4760]: I0121 15:54:21.726837 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerDied","Data":"e0b921ab21c8bb32b2a18330e6a05add434649cba02aa921e03391d26694c2f1"}
Jan 21 15:54:21 crc kubenswrapper[4760]: I0121 15:54:21.726990 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerStarted","Data":"81da7fef60e0d834b22928a0a5dcf4687660734290ef3e62d24a36191f68fa2a"}
Jan 21 15:54:21 crc kubenswrapper[4760]: I0121 15:54:21.727018 4760 scope.go:117] "RemoveContainer" containerID="c4b0860c42e9db374715a149de88bf244ba92b6c1c6ff9ab364c384321546b99"
Jan 21 15:56:50 crc kubenswrapper[4760]: I0121 15:56:50.946438 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 15:56:50 crc kubenswrapper[4760]: I0121 15:56:50.947496 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:57:20 crc kubenswrapper[4760]: I0121 15:57:20.946216 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:57:20 crc kubenswrapper[4760]: I0121 15:57:20.947446 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:57:23 crc kubenswrapper[4760]: I0121 15:57:23.585738 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-r7btf"] Jan 21 15:57:23 crc kubenswrapper[4760]: E0121 15:57:23.590612 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d974904-dd7e-42df-8d49-3c5633b30767" containerName="registry" Jan 21 15:57:23 crc kubenswrapper[4760]: I0121 15:57:23.590667 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d974904-dd7e-42df-8d49-3c5633b30767" containerName="registry" Jan 21 15:57:23 crc kubenswrapper[4760]: I0121 15:57:23.591006 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d974904-dd7e-42df-8d49-3c5633b30767" containerName="registry" Jan 21 15:57:23 crc kubenswrapper[4760]: I0121 15:57:23.591751 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-r7btf" Jan 21 15:57:23 crc kubenswrapper[4760]: I0121 15:57:23.602896 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 21 15:57:23 crc kubenswrapper[4760]: I0121 15:57:23.605939 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 21 15:57:23 crc kubenswrapper[4760]: I0121 15:57:23.607150 4760 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-78jhn" Jan 21 15:57:23 crc kubenswrapper[4760]: I0121 15:57:23.620405 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-r7btf"] Jan 21 15:57:23 crc kubenswrapper[4760]: I0121 15:57:23.642407 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-xn52x"] Jan 21 15:57:23 crc kubenswrapper[4760]: I0121 15:57:23.643191 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-xn52x"] Jan 21 15:57:23 crc kubenswrapper[4760]: I0121 15:57:23.643215 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-rhdtg"] Jan 21 15:57:23 crc kubenswrapper[4760]: I0121 15:57:23.643748 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-rhdtg" Jan 21 15:57:23 crc kubenswrapper[4760]: I0121 15:57:23.644141 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-xn52x" Jan 21 15:57:23 crc kubenswrapper[4760]: I0121 15:57:23.646262 4760 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-pkj9t" Jan 21 15:57:23 crc kubenswrapper[4760]: I0121 15:57:23.646483 4760 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-cm7gj" Jan 21 15:57:23 crc kubenswrapper[4760]: I0121 15:57:23.651699 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-rhdtg"] Jan 21 15:57:23 crc kubenswrapper[4760]: I0121 15:57:23.703625 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrpq4\" (UniqueName: \"kubernetes.io/projected/f66dc60b-4a53-45ba-a0af-74d7ddd2d6b4-kube-api-access-zrpq4\") pod \"cert-manager-cainjector-cf98fcc89-r7btf\" (UID: \"f66dc60b-4a53-45ba-a0af-74d7ddd2d6b4\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-r7btf" Jan 21 15:57:23 crc kubenswrapper[4760]: I0121 15:57:23.703693 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4kqz\" (UniqueName: \"kubernetes.io/projected/9c2bdefb-6d75-4da7-89bb-160ec8b900da-kube-api-access-x4kqz\") pod \"cert-manager-webhook-687f57d79b-rhdtg\" (UID: \"9c2bdefb-6d75-4da7-89bb-160ec8b900da\") " pod="cert-manager/cert-manager-webhook-687f57d79b-rhdtg" Jan 21 15:57:23 crc kubenswrapper[4760]: I0121 15:57:23.703749 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlz4z\" (UniqueName: \"kubernetes.io/projected/2291564d-b6d1-4334-86b3-a41d012c6827-kube-api-access-zlz4z\") pod \"cert-manager-858654f9db-xn52x\" (UID: \"2291564d-b6d1-4334-86b3-a41d012c6827\") " pod="cert-manager/cert-manager-858654f9db-xn52x" Jan 21 15:57:23 crc kubenswrapper[4760]: I0121 15:57:23.805629 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlz4z\" (UniqueName: \"kubernetes.io/projected/2291564d-b6d1-4334-86b3-a41d012c6827-kube-api-access-zlz4z\") pod \"cert-manager-858654f9db-xn52x\" (UID: \"2291564d-b6d1-4334-86b3-a41d012c6827\") " pod="cert-manager/cert-manager-858654f9db-xn52x" Jan 21 15:57:23 crc kubenswrapper[4760]: I0121 15:57:23.805856 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrpq4\" (UniqueName: \"kubernetes.io/projected/f66dc60b-4a53-45ba-a0af-74d7ddd2d6b4-kube-api-access-zrpq4\") pod \"cert-manager-cainjector-cf98fcc89-r7btf\" (UID: \"f66dc60b-4a53-45ba-a0af-74d7ddd2d6b4\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-r7btf" Jan 21 15:57:23 crc kubenswrapper[4760]: I0121 15:57:23.805914 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4kqz\" (UniqueName: \"kubernetes.io/projected/9c2bdefb-6d75-4da7-89bb-160ec8b900da-kube-api-access-x4kqz\") pod \"cert-manager-webhook-687f57d79b-rhdtg\" (UID: \"9c2bdefb-6d75-4da7-89bb-160ec8b900da\") " pod="cert-manager/cert-manager-webhook-687f57d79b-rhdtg" Jan 21 15:57:23 crc kubenswrapper[4760]: I0121 15:57:23.829281 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4kqz\" (UniqueName: \"kubernetes.io/projected/9c2bdefb-6d75-4da7-89bb-160ec8b900da-kube-api-access-x4kqz\") pod \"cert-manager-webhook-687f57d79b-rhdtg\" (UID: \"9c2bdefb-6d75-4da7-89bb-160ec8b900da\") " pod="cert-manager/cert-manager-webhook-687f57d79b-rhdtg" Jan 21 15:57:23 crc kubenswrapper[4760]: I0121 15:57:23.829503 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrpq4\" (UniqueName: \"kubernetes.io/projected/f66dc60b-4a53-45ba-a0af-74d7ddd2d6b4-kube-api-access-zrpq4\") pod \"cert-manager-cainjector-cf98fcc89-r7btf\" (UID: \"f66dc60b-4a53-45ba-a0af-74d7ddd2d6b4\") " 
pod="cert-manager/cert-manager-cainjector-cf98fcc89-r7btf" Jan 21 15:57:23 crc kubenswrapper[4760]: I0121 15:57:23.831061 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlz4z\" (UniqueName: \"kubernetes.io/projected/2291564d-b6d1-4334-86b3-a41d012c6827-kube-api-access-zlz4z\") pod \"cert-manager-858654f9db-xn52x\" (UID: \"2291564d-b6d1-4334-86b3-a41d012c6827\") " pod="cert-manager/cert-manager-858654f9db-xn52x" Jan 21 15:57:23 crc kubenswrapper[4760]: I0121 15:57:23.921797 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-r7btf" Jan 21 15:57:24 crc kubenswrapper[4760]: I0121 15:57:24.024097 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-rhdtg" Jan 21 15:57:24 crc kubenswrapper[4760]: I0121 15:57:24.033177 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-xn52x" Jan 21 15:57:24 crc kubenswrapper[4760]: I0121 15:57:24.177591 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-r7btf"] Jan 21 15:57:24 crc kubenswrapper[4760]: I0121 15:57:24.211831 4760 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 15:57:24 crc kubenswrapper[4760]: I0121 15:57:24.274297 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-xn52x"] Jan 21 15:57:24 crc kubenswrapper[4760]: I0121 15:57:24.314282 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-rhdtg"] Jan 21 15:57:24 crc kubenswrapper[4760]: W0121 15:57:24.318118 4760 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c2bdefb_6d75_4da7_89bb_160ec8b900da.slice/crio-3ebf7ebf0fd9aba3a9f79cf62b9ae36243cbb60bcc44896bf922339c112dc522 WatchSource:0}: Error finding container 3ebf7ebf0fd9aba3a9f79cf62b9ae36243cbb60bcc44896bf922339c112dc522: Status 404 returned error can't find the container with id 3ebf7ebf0fd9aba3a9f79cf62b9ae36243cbb60bcc44896bf922339c112dc522 Jan 21 15:57:24 crc kubenswrapper[4760]: I0121 15:57:24.858940 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-xn52x" event={"ID":"2291564d-b6d1-4334-86b3-a41d012c6827","Type":"ContainerStarted","Data":"cb2cae5d8f9307e5755617eae5162829f4503a74ea429bbb5c525ac48f13a366"} Jan 21 15:57:24 crc kubenswrapper[4760]: I0121 15:57:24.860549 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-r7btf" event={"ID":"f66dc60b-4a53-45ba-a0af-74d7ddd2d6b4","Type":"ContainerStarted","Data":"f91226e04158a4950e9cd6e4bfeb317bce45350212c70799a0f9e8393e412408"} Jan 21 15:57:24 crc kubenswrapper[4760]: I0121 15:57:24.861764 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-rhdtg" event={"ID":"9c2bdefb-6d75-4da7-89bb-160ec8b900da","Type":"ContainerStarted","Data":"3ebf7ebf0fd9aba3a9f79cf62b9ae36243cbb60bcc44896bf922339c112dc522"} Jan 21 15:57:30 crc kubenswrapper[4760]: I0121 15:57:30.901391 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-xn52x" event={"ID":"2291564d-b6d1-4334-86b3-a41d012c6827","Type":"ContainerStarted","Data":"f1fd1377ee6d050608359fdcf7f936b19965b9dfc0e68f33cd0d35e643b93fc4"} Jan 21 15:57:30 crc kubenswrapper[4760]: I0121 15:57:30.904791 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-r7btf" 
event={"ID":"f66dc60b-4a53-45ba-a0af-74d7ddd2d6b4","Type":"ContainerStarted","Data":"80202b4bbe5b828bb04c2e2433973a40aa118b5650b7897ea671a1046540f6ba"} Jan 21 15:57:30 crc kubenswrapper[4760]: I0121 15:57:30.907573 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-rhdtg" event={"ID":"9c2bdefb-6d75-4da7-89bb-160ec8b900da","Type":"ContainerStarted","Data":"3461b98b139ee31db92cc28f68f0639673d43cd5f488ee86384278dfd8a5da66"} Jan 21 15:57:30 crc kubenswrapper[4760]: I0121 15:57:30.908063 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-rhdtg" Jan 21 15:57:30 crc kubenswrapper[4760]: I0121 15:57:30.926948 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-xn52x" podStartSLOduration=1.626972108 podStartE2EDuration="7.926924282s" podCreationTimestamp="2026-01-21 15:57:23 +0000 UTC" firstStartedPulling="2026-01-21 15:57:24.291056437 +0000 UTC m=+614.958826015" lastFinishedPulling="2026-01-21 15:57:30.591008611 +0000 UTC m=+621.258778189" observedRunningTime="2026-01-21 15:57:30.925313383 +0000 UTC m=+621.593082961" watchObservedRunningTime="2026-01-21 15:57:30.926924282 +0000 UTC m=+621.594693860" Jan 21 15:57:30 crc kubenswrapper[4760]: I0121 15:57:30.947880 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-rhdtg" podStartSLOduration=1.68332471 podStartE2EDuration="7.947858001s" podCreationTimestamp="2026-01-21 15:57:23 +0000 UTC" firstStartedPulling="2026-01-21 15:57:24.320082285 +0000 UTC m=+614.987851863" lastFinishedPulling="2026-01-21 15:57:30.584615576 +0000 UTC m=+621.252385154" observedRunningTime="2026-01-21 15:57:30.94738106 +0000 UTC m=+621.615150688" watchObservedRunningTime="2026-01-21 15:57:30.947858001 +0000 UTC m=+621.615627579" Jan 21 15:57:30 crc kubenswrapper[4760]: I0121 15:57:30.989967 4760 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-r7btf" podStartSLOduration=1.62293214 podStartE2EDuration="7.989943485s" podCreationTimestamp="2026-01-21 15:57:23 +0000 UTC" firstStartedPulling="2026-01-21 15:57:24.209951455 +0000 UTC m=+614.877721033" lastFinishedPulling="2026-01-21 15:57:30.5769628 +0000 UTC m=+621.244732378" observedRunningTime="2026-01-21 15:57:30.988562761 +0000 UTC m=+621.656332339" watchObservedRunningTime="2026-01-21 15:57:30.989943485 +0000 UTC m=+621.657713053" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.296498 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-gfprm"] Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.297532 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="ovn-controller" containerID="cri-o://d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1" gracePeriod=30 Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.297581 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="nbdb" containerID="cri-o://cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03" gracePeriod=30 Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.297724 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="northd" containerID="cri-o://6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287" gracePeriod=30 Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.297785 4760 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a" gracePeriod=30 Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.297840 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="kube-rbac-proxy-node" containerID="cri-o://93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6" gracePeriod=30 Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.297894 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="ovn-acl-logging" containerID="cri-o://5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91" gracePeriod=30 Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.298097 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="sbdb" containerID="cri-o://521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e" gracePeriod=30 Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.334507 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="ovnkube-controller" containerID="cri-o://80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff" gracePeriod=30 Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.659000 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gfprm_aa19ef03-9cda-4ae5-b47c-4a3bac73dc49/ovnkube-controller/3.log" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 
15:57:33.662700 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gfprm_aa19ef03-9cda-4ae5-b47c-4a3bac73dc49/ovn-acl-logging/0.log" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.664195 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gfprm_aa19ef03-9cda-4ae5-b47c-4a3bac73dc49/ovn-controller/0.log" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.665205 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.726017 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-d7x4z"] Jan 21 15:57:33 crc kubenswrapper[4760]: E0121 15:57:33.726501 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="sbdb" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.726766 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="sbdb" Jan 21 15:57:33 crc kubenswrapper[4760]: E0121 15:57:33.726836 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="kube-rbac-proxy-ovn-metrics" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.726895 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="kube-rbac-proxy-ovn-metrics" Jan 21 15:57:33 crc kubenswrapper[4760]: E0121 15:57:33.726997 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="kube-rbac-proxy-node" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.727574 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="kube-rbac-proxy-node" Jan 21 15:57:33 crc 
kubenswrapper[4760]: E0121 15:57:33.727655 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="northd" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.727743 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="northd" Jan 21 15:57:33 crc kubenswrapper[4760]: E0121 15:57:33.727810 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="ovn-controller" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.727869 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="ovn-controller" Jan 21 15:57:33 crc kubenswrapper[4760]: E0121 15:57:33.727942 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="ovnkube-controller" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.728011 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="ovnkube-controller" Jan 21 15:57:33 crc kubenswrapper[4760]: E0121 15:57:33.728078 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="ovnkube-controller" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.728169 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="ovnkube-controller" Jan 21 15:57:33 crc kubenswrapper[4760]: E0121 15:57:33.728240 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="nbdb" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.728305 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="nbdb" Jan 21 15:57:33 crc kubenswrapper[4760]: E0121 15:57:33.728423 
4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="kubecfg-setup" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.728500 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="kubecfg-setup" Jan 21 15:57:33 crc kubenswrapper[4760]: E0121 15:57:33.728577 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="ovn-acl-logging" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.728642 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="ovn-acl-logging" Jan 21 15:57:33 crc kubenswrapper[4760]: E0121 15:57:33.728704 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="ovnkube-controller" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.728770 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="ovnkube-controller" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.728967 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="sbdb" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.729043 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="ovnkube-controller" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.729107 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="ovnkube-controller" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.729170 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="ovn-controller" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.729235 
4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="ovnkube-controller" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.729311 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="ovnkube-controller" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.729409 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="ovnkube-controller" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.729474 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="kube-rbac-proxy-node" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.729533 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="ovn-acl-logging" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.729603 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="nbdb" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.729664 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="kube-rbac-proxy-ovn-metrics" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.729732 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="northd" Jan 21 15:57:33 crc kubenswrapper[4760]: E0121 15:57:33.729926 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="ovnkube-controller" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.730002 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="ovnkube-controller" Jan 21 15:57:33 crc 
kubenswrapper[4760]: E0121 15:57:33.730071 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="ovnkube-controller"
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.730132 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerName="ovnkube-controller"
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.732407 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z"
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.783283 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-systemd-units\") pod \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") "
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.783471 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-var-lib-openvswitch\") pod \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") "
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.783505 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-log-socket\") pod \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") "
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.783393 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" (UID: "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.783561 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" (UID: "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.783607 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-run-ovn-kubernetes\") pod \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") "
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.783648 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-ovnkube-script-lib\") pod \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") "
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.783655 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-log-socket" (OuterVolumeSpecName: "log-socket") pod "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" (UID: "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.783682 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" (UID: "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.783695 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-cni-netd\") pod \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") "
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.783734 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-env-overrides\") pod \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") "
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.783766 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-cni-bin\") pod \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") "
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.783794 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-run-systemd\") pod \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") "
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.783846 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-etc-openvswitch\") pod \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") "
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.783896 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-run-netns\") pod \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") "
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.783930 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-kubelet\") pod \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") "
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.783966 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-slash\") pod \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") "
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.784003 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-var-lib-cni-networks-ovn-kubernetes\") pod \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") "
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.784049 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kv9h\" (UniqueName: \"kubernetes.io/projected/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-kube-api-access-7kv9h\") pod \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") "
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.784095 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-node-log\") pod \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") "
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.784156 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-ovnkube-config\") pod \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") "
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.784189 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-run-ovn\") pod \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") "
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.784056 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" (UID: "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.784079 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" (UID: "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.784094 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" (UID: "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.784185 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" (UID: "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.784229 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-ovn-node-metrics-cert\") pod \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") "
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.784273 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-run-openvswitch\") pod \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\" (UID: \"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49\") "
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.784273 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" (UID: "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.784285 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" (UID: "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.784315 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" (UID: "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.784381 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-slash" (OuterVolumeSpecName: "host-slash") pod "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" (UID: "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.784365 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" (UID: "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.784410 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-node-log" (OuterVolumeSpecName: "node-log") pod "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" (UID: "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.784525 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-host-run-ovn-kubernetes\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z"
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.784612 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-host-cni-bin\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z"
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.784654 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-run-systemd\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z"
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.784683 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-env-overrides\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z"
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.784716 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-run-openvswitch\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z"
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.784750 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z"
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.784765 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" (UID: "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.784786 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-log-socket\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z"
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.784843 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" (UID: "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.784919 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-ovn-node-metrics-cert\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z"
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.785032 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-host-kubelet\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z"
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.785101 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-etc-openvswitch\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z"
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.785128 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-var-lib-openvswitch\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z"
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.785162 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kw88\" (UniqueName: \"kubernetes.io/projected/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-kube-api-access-8kw88\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z"
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.785187 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-host-run-netns\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z"
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.785220 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-host-cni-netd\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z"
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.785247 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-host-slash\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z"
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.785271 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-ovnkube-config\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z"
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.785302 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-node-log\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z"
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.785349 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-run-ovn\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z"
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.785375 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-ovnkube-script-lib\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z"
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.785397 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-systemd-units\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z"
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.785477 4760 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-cni-netd\") on node \"crc\" DevicePath \"\""
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.785576 4760 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-cni-bin\") on node \"crc\" DevicePath \"\""
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.785593 4760 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-etc-openvswitch\") on node \"crc\" DevicePath \"\""
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.785607 4760 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-run-netns\") on node \"crc\" DevicePath \"\""
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.785620 4760 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-kubelet\") on node \"crc\" DevicePath \"\""
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.785632 4760 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-slash\") on node \"crc\" DevicePath \"\""
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.785648 4760 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.785662 4760 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-node-log\") on node \"crc\" DevicePath \"\""
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.785683 4760 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-ovnkube-config\") on node \"crc\" DevicePath \"\""
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.785696 4760 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-run-ovn\") on node \"crc\" DevicePath \"\""
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.785708 4760 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-run-openvswitch\") on node \"crc\" DevicePath \"\""
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.785721 4760 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-systemd-units\") on node \"crc\" DevicePath \"\""
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.785733 4760 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-var-lib-openvswitch\") on node \"crc\" DevicePath \"\""
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.785744 4760 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-log-socket\") on node \"crc\" DevicePath \"\""
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.785756 4760 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.785767 4760 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.786053 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" (UID: "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.790492 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" (UID: "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.790828 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-kube-api-access-7kv9h" (OuterVolumeSpecName: "kube-api-access-7kv9h") pod "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" (UID: "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49"). InnerVolumeSpecName "kube-api-access-7kv9h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.799354 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" (UID: "aa19ef03-9cda-4ae5-b47c-4a3bac73dc49"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.886750 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-host-cni-bin\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z"
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.886799 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-run-systemd\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z"
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.886819 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-env-overrides\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z"
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.886834 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-run-openvswitch\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z"
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.886858 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z"
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.886869 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-host-cni-bin\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z"
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.886900 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-log-socket\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z"
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.886877 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-log-socket\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z"
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.886928 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-ovn-node-metrics-cert\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z"
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.886927 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-run-systemd\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z"
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.886971 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z"
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.886940 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-run-openvswitch\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z"
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.886947 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-host-kubelet\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z"
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.886972 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-host-kubelet\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z"
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.887051 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-etc-openvswitch\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z"
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.887071 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-var-lib-openvswitch\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z"
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.887096 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kw88\" (UniqueName: \"kubernetes.io/projected/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-kube-api-access-8kw88\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z"
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.887114 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-host-run-netns\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z"
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.887118 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-etc-openvswitch\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z"
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.887138 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-host-cni-netd\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z"
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.887159 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-host-slash\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z"
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.887176 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-ovnkube-config\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z"
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.887196 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-node-log\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z"
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.887217 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-run-ovn\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z"
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.887235 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-ovnkube-script-lib\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z"
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.887253 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-systemd-units\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z"
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.887290 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-host-run-ovn-kubernetes\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z"
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.887425 4760 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-env-overrides\") on node \"crc\" DevicePath \"\""
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.887439 4760 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-run-systemd\") on node \"crc\" DevicePath \"\""
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.887439 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-env-overrides\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z"
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.887452 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kv9h\" (UniqueName: \"kubernetes.io/projected/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-kube-api-access-7kv9h\") on node \"crc\" DevicePath \"\""
Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.887464 4760 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName:
\"kubernetes.io/secret/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.887480 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-host-cni-netd\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.887490 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-host-run-ovn-kubernetes\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.887512 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-host-run-netns\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.887514 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-run-ovn\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.887535 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-var-lib-openvswitch\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.887542 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-host-slash\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.887557 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-node-log\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.887576 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-systemd-units\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.888002 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-ovnkube-config\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.888193 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-ovnkube-script-lib\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.891538 4760 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-ovn-node-metrics-cert\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.903370 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kw88\" (UniqueName: \"kubernetes.io/projected/42ddd7c5-3c8b-47e2-99df-1b4fc11fa349-kube-api-access-8kw88\") pod \"ovnkube-node-d7x4z\" (UID: \"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349\") " pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.927311 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gfprm_aa19ef03-9cda-4ae5-b47c-4a3bac73dc49/ovnkube-controller/3.log" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.929531 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gfprm_aa19ef03-9cda-4ae5-b47c-4a3bac73dc49/ovn-acl-logging/0.log" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930020 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-gfprm_aa19ef03-9cda-4ae5-b47c-4a3bac73dc49/ovn-controller/0.log" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930353 4760 generic.go:334] "Generic (PLEG): container finished" podID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerID="80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff" exitCode=0 Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930380 4760 generic.go:334] "Generic (PLEG): container finished" podID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerID="521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e" exitCode=0 Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930388 4760 generic.go:334] 
"Generic (PLEG): container finished" podID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerID="cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03" exitCode=0 Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930397 4760 generic.go:334] "Generic (PLEG): container finished" podID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerID="6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287" exitCode=0 Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930406 4760 generic.go:334] "Generic (PLEG): container finished" podID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerID="7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a" exitCode=0 Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930394 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" event={"ID":"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49","Type":"ContainerDied","Data":"80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930413 4760 generic.go:334] "Generic (PLEG): container finished" podID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerID="93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6" exitCode=0 Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930459 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930480 4760 scope.go:117] "RemoveContainer" containerID="80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930480 4760 generic.go:334] "Generic (PLEG): container finished" podID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerID="5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91" exitCode=143 Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930499 4760 generic.go:334] "Generic (PLEG): container finished" podID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" containerID="d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1" exitCode=143 Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930461 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" event={"ID":"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49","Type":"ContainerDied","Data":"521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930601 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" event={"ID":"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49","Type":"ContainerDied","Data":"cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930621 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" event={"ID":"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49","Type":"ContainerDied","Data":"6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930631 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" 
event={"ID":"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49","Type":"ContainerDied","Data":"7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930641 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" event={"ID":"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49","Type":"ContainerDied","Data":"93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930661 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930671 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930676 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930682 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930688 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930694 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6"} Jan 21 15:57:33 crc 
kubenswrapper[4760]: I0121 15:57:33.930699 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930706 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930715 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930728 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" event={"ID":"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49","Type":"ContainerDied","Data":"5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930740 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930749 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930755 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930761 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930766 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930773 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930780 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930787 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930794 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930801 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930813 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" event={"ID":"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49","Type":"ContainerDied","Data":"d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930827 4760 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930835 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930843 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930851 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930857 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930863 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930870 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930876 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930882 4760 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930888 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930896 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gfprm" event={"ID":"aa19ef03-9cda-4ae5-b47c-4a3bac73dc49","Type":"ContainerDied","Data":"b5fa27d025848e094ee9fbae80d0d1dc50a2e3a8dd42089183368ae4f1396adf"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930907 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930917 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930924 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930930 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930937 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287"} Jan 21 
15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930943 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930949 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930955 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930961 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.930968 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.932259 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dx99k_7300c51f-415f-4696-bda1-a9e79ae5704a/kube-multus/2.log" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.932883 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dx99k_7300c51f-415f-4696-bda1-a9e79ae5704a/kube-multus/1.log" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.932919 4760 generic.go:334] "Generic (PLEG): container finished" podID="7300c51f-415f-4696-bda1-a9e79ae5704a" containerID="d068d702c3829273a54de5ce05bc939750eeed404a6fdced862bb6cd1f238505" exitCode=2 Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.932944 4760 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dx99k" event={"ID":"7300c51f-415f-4696-bda1-a9e79ae5704a","Type":"ContainerDied","Data":"d068d702c3829273a54de5ce05bc939750eeed404a6fdced862bb6cd1f238505"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.932972 4760 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"293f11d27cd6f37ed1446eb9d03303cd0d18c5e0c23fb8fce2818caaaab93cc5"} Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.933293 4760 scope.go:117] "RemoveContainer" containerID="d068d702c3829273a54de5ce05bc939750eeed404a6fdced862bb6cd1f238505" Jan 21 15:57:33 crc kubenswrapper[4760]: E0121 15:57:33.933486 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-dx99k_openshift-multus(7300c51f-415f-4696-bda1-a9e79ae5704a)\"" pod="openshift-multus/multus-dx99k" podUID="7300c51f-415f-4696-bda1-a9e79ae5704a" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.946752 4760 scope.go:117] "RemoveContainer" containerID="941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.966948 4760 scope.go:117] "RemoveContainer" containerID="521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e" Jan 21 15:57:33 crc kubenswrapper[4760]: I0121 15:57:33.993749 4760 scope.go:117] "RemoveContainer" containerID="cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.008742 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-gfprm"] Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.014238 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-gfprm"] Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 
15:57:34.019316 4760 scope.go:117] "RemoveContainer" containerID="6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.033538 4760 scope.go:117] "RemoveContainer" containerID="7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.047301 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.058262 4760 scope.go:117] "RemoveContainer" containerID="93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.074281 4760 scope.go:117] "RemoveContainer" containerID="5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91" Jan 21 15:57:34 crc kubenswrapper[4760]: W0121 15:57:34.082355 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42ddd7c5_3c8b_47e2_99df_1b4fc11fa349.slice/crio-630460f8a3cecfc45ba5062fb0eb17967541ee5848633b2bbe6db9e792577536 WatchSource:0}: Error finding container 630460f8a3cecfc45ba5062fb0eb17967541ee5848633b2bbe6db9e792577536: Status 404 returned error can't find the container with id 630460f8a3cecfc45ba5062fb0eb17967541ee5848633b2bbe6db9e792577536 Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.088471 4760 scope.go:117] "RemoveContainer" containerID="d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.107596 4760 scope.go:117] "RemoveContainer" containerID="6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.124261 4760 scope.go:117] "RemoveContainer" containerID="80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff" Jan 21 15:57:34 crc kubenswrapper[4760]: E0121 15:57:34.125045 
4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff\": container with ID starting with 80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff not found: ID does not exist" containerID="80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.125133 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff"} err="failed to get container status \"80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff\": rpc error: code = NotFound desc = could not find container \"80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff\": container with ID starting with 80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.125196 4760 scope.go:117] "RemoveContainer" containerID="941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c" Jan 21 15:57:34 crc kubenswrapper[4760]: E0121 15:57:34.125848 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c\": container with ID starting with 941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c not found: ID does not exist" containerID="941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.125881 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c"} err="failed to get container status \"941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c\": rpc error: code = 
NotFound desc = could not find container \"941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c\": container with ID starting with 941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.125902 4760 scope.go:117] "RemoveContainer" containerID="521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e" Jan 21 15:57:34 crc kubenswrapper[4760]: E0121 15:57:34.126216 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e\": container with ID starting with 521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e not found: ID does not exist" containerID="521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.126247 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e"} err="failed to get container status \"521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e\": rpc error: code = NotFound desc = could not find container \"521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e\": container with ID starting with 521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.126273 4760 scope.go:117] "RemoveContainer" containerID="cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03" Jan 21 15:57:34 crc kubenswrapper[4760]: E0121 15:57:34.127429 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\": container with ID starting with 
cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03 not found: ID does not exist" containerID="cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.127475 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03"} err="failed to get container status \"cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\": rpc error: code = NotFound desc = could not find container \"cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\": container with ID starting with cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03 not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.127511 4760 scope.go:117] "RemoveContainer" containerID="6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287" Jan 21 15:57:34 crc kubenswrapper[4760]: E0121 15:57:34.127805 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\": container with ID starting with 6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287 not found: ID does not exist" containerID="6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.127826 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287"} err="failed to get container status \"6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\": rpc error: code = NotFound desc = could not find container \"6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\": container with ID starting with 6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287 not found: ID does not 
exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.127842 4760 scope.go:117] "RemoveContainer" containerID="7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a" Jan 21 15:57:34 crc kubenswrapper[4760]: E0121 15:57:34.128406 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\": container with ID starting with 7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a not found: ID does not exist" containerID="7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.128443 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a"} err="failed to get container status \"7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\": rpc error: code = NotFound desc = could not find container \"7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\": container with ID starting with 7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.128461 4760 scope.go:117] "RemoveContainer" containerID="93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6" Jan 21 15:57:34 crc kubenswrapper[4760]: E0121 15:57:34.130029 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\": container with ID starting with 93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6 not found: ID does not exist" containerID="93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.130060 4760 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6"} err="failed to get container status \"93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\": rpc error: code = NotFound desc = could not find container \"93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\": container with ID starting with 93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6 not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.130076 4760 scope.go:117] "RemoveContainer" containerID="5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91" Jan 21 15:57:34 crc kubenswrapper[4760]: E0121 15:57:34.130604 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\": container with ID starting with 5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91 not found: ID does not exist" containerID="5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.130633 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91"} err="failed to get container status \"5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\": rpc error: code = NotFound desc = could not find container \"5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\": container with ID starting with 5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91 not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.130651 4760 scope.go:117] "RemoveContainer" containerID="d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1" Jan 21 15:57:34 crc kubenswrapper[4760]: E0121 15:57:34.131168 4760 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\": container with ID starting with d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1 not found: ID does not exist" containerID="d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.131202 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1"} err="failed to get container status \"d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\": rpc error: code = NotFound desc = could not find container \"d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\": container with ID starting with d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1 not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.131226 4760 scope.go:117] "RemoveContainer" containerID="6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a" Jan 21 15:57:34 crc kubenswrapper[4760]: E0121 15:57:34.131681 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\": container with ID starting with 6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a not found: ID does not exist" containerID="6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.131722 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a"} err="failed to get container status \"6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\": rpc error: code = NotFound desc = could 
not find container \"6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\": container with ID starting with 6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.131748 4760 scope.go:117] "RemoveContainer" containerID="80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.133060 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff"} err="failed to get container status \"80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff\": rpc error: code = NotFound desc = could not find container \"80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff\": container with ID starting with 80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.133139 4760 scope.go:117] "RemoveContainer" containerID="941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.137258 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c"} err="failed to get container status \"941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c\": rpc error: code = NotFound desc = could not find container \"941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c\": container with ID starting with 941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.137297 4760 scope.go:117] "RemoveContainer" containerID="521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 
15:57:34.137817 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e"} err="failed to get container status \"521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e\": rpc error: code = NotFound desc = could not find container \"521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e\": container with ID starting with 521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.137852 4760 scope.go:117] "RemoveContainer" containerID="cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.138307 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03"} err="failed to get container status \"cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\": rpc error: code = NotFound desc = could not find container \"cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\": container with ID starting with cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03 not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.138359 4760 scope.go:117] "RemoveContainer" containerID="6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.138682 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287"} err="failed to get container status \"6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\": rpc error: code = NotFound desc = could not find container \"6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\": container with ID starting with 
6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287 not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.138711 4760 scope.go:117] "RemoveContainer" containerID="7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.139108 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a"} err="failed to get container status \"7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\": rpc error: code = NotFound desc = could not find container \"7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\": container with ID starting with 7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.139131 4760 scope.go:117] "RemoveContainer" containerID="93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.139488 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6"} err="failed to get container status \"93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\": rpc error: code = NotFound desc = could not find container \"93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\": container with ID starting with 93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6 not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.139536 4760 scope.go:117] "RemoveContainer" containerID="5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.139860 4760 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91"} err="failed to get container status \"5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\": rpc error: code = NotFound desc = could not find container \"5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\": container with ID starting with 5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91 not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.139888 4760 scope.go:117] "RemoveContainer" containerID="d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.140438 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1"} err="failed to get container status \"d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\": rpc error: code = NotFound desc = could not find container \"d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\": container with ID starting with d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1 not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.140469 4760 scope.go:117] "RemoveContainer" containerID="6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.140795 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a"} err="failed to get container status \"6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\": rpc error: code = NotFound desc = could not find container \"6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\": container with ID starting with 6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a not found: ID does not 
exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.140824 4760 scope.go:117] "RemoveContainer" containerID="80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.141220 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff"} err="failed to get container status \"80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff\": rpc error: code = NotFound desc = could not find container \"80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff\": container with ID starting with 80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.141264 4760 scope.go:117] "RemoveContainer" containerID="941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.141616 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c"} err="failed to get container status \"941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c\": rpc error: code = NotFound desc = could not find container \"941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c\": container with ID starting with 941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.141644 4760 scope.go:117] "RemoveContainer" containerID="521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.141922 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e"} err="failed to get container status 
\"521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e\": rpc error: code = NotFound desc = could not find container \"521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e\": container with ID starting with 521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.141945 4760 scope.go:117] "RemoveContainer" containerID="cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.142197 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03"} err="failed to get container status \"cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\": rpc error: code = NotFound desc = could not find container \"cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\": container with ID starting with cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03 not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.142221 4760 scope.go:117] "RemoveContainer" containerID="6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.142694 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287"} err="failed to get container status \"6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\": rpc error: code = NotFound desc = could not find container \"6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\": container with ID starting with 6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287 not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.142733 4760 scope.go:117] "RemoveContainer" 
containerID="7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.143033 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a"} err="failed to get container status \"7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\": rpc error: code = NotFound desc = could not find container \"7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\": container with ID starting with 7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.143061 4760 scope.go:117] "RemoveContainer" containerID="93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.143351 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6"} err="failed to get container status \"93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\": rpc error: code = NotFound desc = could not find container \"93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\": container with ID starting with 93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6 not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.143380 4760 scope.go:117] "RemoveContainer" containerID="5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.143643 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91"} err="failed to get container status \"5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\": rpc error: code = NotFound desc = could 
not find container \"5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\": container with ID starting with 5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91 not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.143668 4760 scope.go:117] "RemoveContainer" containerID="d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.143995 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1"} err="failed to get container status \"d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\": rpc error: code = NotFound desc = could not find container \"d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\": container with ID starting with d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1 not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.144016 4760 scope.go:117] "RemoveContainer" containerID="6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.144363 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a"} err="failed to get container status \"6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\": rpc error: code = NotFound desc = could not find container \"6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\": container with ID starting with 6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.144387 4760 scope.go:117] "RemoveContainer" containerID="80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 
15:57:34.144771 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff"} err="failed to get container status \"80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff\": rpc error: code = NotFound desc = could not find container \"80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff\": container with ID starting with 80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.144802 4760 scope.go:117] "RemoveContainer" containerID="941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.145173 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c"} err="failed to get container status \"941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c\": rpc error: code = NotFound desc = could not find container \"941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c\": container with ID starting with 941e01bcd3d3f81b5d7f66595d4021e57541cb6050c9eddddc71caae60a01f9c not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.145219 4760 scope.go:117] "RemoveContainer" containerID="521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.145695 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e"} err="failed to get container status \"521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e\": rpc error: code = NotFound desc = could not find container \"521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e\": container with ID starting with 
521105e19497a7eb820902c53ce0d598d49bc889163081a35342e62b3d584a6e not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.145735 4760 scope.go:117] "RemoveContainer" containerID="cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.146082 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03"} err="failed to get container status \"cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\": rpc error: code = NotFound desc = could not find container \"cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03\": container with ID starting with cf9a000108e1c56a0ad6105e53e137eae2e208a60a6ba6143539e691175e3a03 not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.146122 4760 scope.go:117] "RemoveContainer" containerID="6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.146476 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287"} err="failed to get container status \"6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\": rpc error: code = NotFound desc = could not find container \"6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287\": container with ID starting with 6867b89d444b1d2c6743d79f4dcb9068201c4a58af597bca1ba94f31ac749287 not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.146560 4760 scope.go:117] "RemoveContainer" containerID="7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.147077 4760 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a"} err="failed to get container status \"7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\": rpc error: code = NotFound desc = could not find container \"7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a\": container with ID starting with 7153de5589f29a16396d35f82b37d4ba49a9d6305621f3cc914eab1bdfb1707a not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.147107 4760 scope.go:117] "RemoveContainer" containerID="93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.147448 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6"} err="failed to get container status \"93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\": rpc error: code = NotFound desc = could not find container \"93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6\": container with ID starting with 93757bc05679d7ae756acb7e00cf794c7276b9004135486fac760aa55d5964e6 not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.147474 4760 scope.go:117] "RemoveContainer" containerID="5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.147949 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91"} err="failed to get container status \"5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\": rpc error: code = NotFound desc = could not find container \"5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91\": container with ID starting with 5f47b7433379e318ca0a3b44af68b9dbab60eadfb742088f2868b1096d147d91 not found: ID does not 
exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.148276 4760 scope.go:117] "RemoveContainer" containerID="d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.148648 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1"} err="failed to get container status \"d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\": rpc error: code = NotFound desc = could not find container \"d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1\": container with ID starting with d7665c4c259dd3013dc862634ff28a5405e488266e5a9a0b72e91b8dea702be1 not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.148684 4760 scope.go:117] "RemoveContainer" containerID="6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.149189 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a"} err="failed to get container status \"6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\": rpc error: code = NotFound desc = could not find container \"6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a\": container with ID starting with 6f1b0ce2af227534c5261feba06a1315ac26b75bcbcab2406703b4779d8ee80a not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.149243 4760 scope.go:117] "RemoveContainer" containerID="80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.149630 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff"} err="failed to get container status 
\"80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff\": rpc error: code = NotFound desc = could not find container \"80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff\": container with ID starting with 80f589feaf953a0fe2c3df72def4482729786207dafcc0dbcc5cf1240ec79aff not found: ID does not exist" Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.940043 4760 generic.go:334] "Generic (PLEG): container finished" podID="42ddd7c5-3c8b-47e2-99df-1b4fc11fa349" containerID="7ef0a5d31796470a1c0ad2f2ca09a5eb05670eeebee1215e1748b5659c6666a8" exitCode=0 Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.940176 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" event={"ID":"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349","Type":"ContainerDied","Data":"7ef0a5d31796470a1c0ad2f2ca09a5eb05670eeebee1215e1748b5659c6666a8"} Jan 21 15:57:34 crc kubenswrapper[4760]: I0121 15:57:34.940727 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" event={"ID":"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349","Type":"ContainerStarted","Data":"630460f8a3cecfc45ba5062fb0eb17967541ee5848633b2bbe6db9e792577536"} Jan 21 15:57:35 crc kubenswrapper[4760]: I0121 15:57:35.630994 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa19ef03-9cda-4ae5-b47c-4a3bac73dc49" path="/var/lib/kubelet/pods/aa19ef03-9cda-4ae5-b47c-4a3bac73dc49/volumes" Jan 21 15:57:35 crc kubenswrapper[4760]: I0121 15:57:35.952838 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" event={"ID":"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349","Type":"ContainerStarted","Data":"21f6af6d7eff729e1ec10e83e5211a847558527500036ba94db466433d3b6fc6"} Jan 21 15:57:35 crc kubenswrapper[4760]: I0121 15:57:35.952893 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" 
event={"ID":"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349","Type":"ContainerStarted","Data":"a8898a5d0b5bc5783dbbe7edc14eb662cc37736c4f110114c2d5f6b94479da42"} Jan 21 15:57:35 crc kubenswrapper[4760]: I0121 15:57:35.952909 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" event={"ID":"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349","Type":"ContainerStarted","Data":"4a0e301336a363a2baa069c510c278d68238914f9bada1dc1d11c414d19eeb39"} Jan 21 15:57:35 crc kubenswrapper[4760]: I0121 15:57:35.952919 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" event={"ID":"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349","Type":"ContainerStarted","Data":"6f09aba28a3900673d5c385766d6045f4a811c52ba57ecb765b57e0ce4d3b09e"} Jan 21 15:57:35 crc kubenswrapper[4760]: I0121 15:57:35.952931 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" event={"ID":"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349","Type":"ContainerStarted","Data":"f2509785598bc0c6f48a30914b41137b2c858fd980fe3fbf6edf75e737eed13e"} Jan 21 15:57:35 crc kubenswrapper[4760]: I0121 15:57:35.952941 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" event={"ID":"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349","Type":"ContainerStarted","Data":"a1a0a304e4cf31170373f0c23322faf61319bc3ddcbd3cfe0e1e27a967b8cb7f"} Jan 21 15:57:37 crc kubenswrapper[4760]: I0121 15:57:37.968545 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" event={"ID":"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349","Type":"ContainerStarted","Data":"8d90259ffcabb3b8ffef503e63aaa00aed51ab1669180d230d0edd0d6c8c744d"} Jan 21 15:57:39 crc kubenswrapper[4760]: I0121 15:57:39.030633 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-rhdtg" Jan 21 15:57:40 crc kubenswrapper[4760]: I0121 
15:57:40.991271 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" event={"ID":"42ddd7c5-3c8b-47e2-99df-1b4fc11fa349","Type":"ContainerStarted","Data":"33258780b5df1d92472fda96effe9538ef9123f2f6d4e2e0e9e667e75b1340b6"} Jan 21 15:57:40 crc kubenswrapper[4760]: I0121 15:57:40.991948 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:40 crc kubenswrapper[4760]: I0121 15:57:40.991973 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:40 crc kubenswrapper[4760]: I0121 15:57:40.991987 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:41 crc kubenswrapper[4760]: I0121 15:57:41.025916 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" podStartSLOduration=8.025897733 podStartE2EDuration="8.025897733s" podCreationTimestamp="2026-01-21 15:57:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:57:41.025074163 +0000 UTC m=+631.692843741" watchObservedRunningTime="2026-01-21 15:57:41.025897733 +0000 UTC m=+631.693667311" Jan 21 15:57:41 crc kubenswrapper[4760]: I0121 15:57:41.029884 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:41 crc kubenswrapper[4760]: I0121 15:57:41.030019 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:57:46 crc kubenswrapper[4760]: I0121 15:57:46.623470 4760 scope.go:117] "RemoveContainer" containerID="d068d702c3829273a54de5ce05bc939750eeed404a6fdced862bb6cd1f238505" Jan 21 15:57:46 crc kubenswrapper[4760]: E0121 
15:57:46.624686 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-dx99k_openshift-multus(7300c51f-415f-4696-bda1-a9e79ae5704a)\"" pod="openshift-multus/multus-dx99k" podUID="7300c51f-415f-4696-bda1-a9e79ae5704a" Jan 21 15:57:50 crc kubenswrapper[4760]: I0121 15:57:50.959860 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 15:57:50 crc kubenswrapper[4760]: I0121 15:57:50.959944 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 15:57:50 crc kubenswrapper[4760]: I0121 15:57:50.960006 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" Jan 21 15:57:50 crc kubenswrapper[4760]: I0121 15:57:50.961061 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"81da7fef60e0d834b22928a0a5dcf4687660734290ef3e62d24a36191f68fa2a"} pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 15:57:50 crc kubenswrapper[4760]: I0121 15:57:50.961135 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" 
podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" containerID="cri-o://81da7fef60e0d834b22928a0a5dcf4687660734290ef3e62d24a36191f68fa2a" gracePeriod=600 Jan 21 15:57:53 crc kubenswrapper[4760]: I0121 15:57:53.064363 4760 generic.go:334] "Generic (PLEG): container finished" podID="5dd365e7-570c-4130-a299-30e376624ce2" containerID="81da7fef60e0d834b22928a0a5dcf4687660734290ef3e62d24a36191f68fa2a" exitCode=0 Jan 21 15:57:53 crc kubenswrapper[4760]: I0121 15:57:53.064396 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerDied","Data":"81da7fef60e0d834b22928a0a5dcf4687660734290ef3e62d24a36191f68fa2a"} Jan 21 15:57:53 crc kubenswrapper[4760]: I0121 15:57:53.064941 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerStarted","Data":"4a4f642c0c2b59b10378fd5974f35f9fb23b198f62bb5a4dbe3d03ad54a3fd8b"} Jan 21 15:57:53 crc kubenswrapper[4760]: I0121 15:57:53.064986 4760 scope.go:117] "RemoveContainer" containerID="e0b921ab21c8bb32b2a18330e6a05add434649cba02aa921e03391d26694c2f1" Jan 21 15:58:01 crc kubenswrapper[4760]: I0121 15:58:01.622375 4760 scope.go:117] "RemoveContainer" containerID="d068d702c3829273a54de5ce05bc939750eeed404a6fdced862bb6cd1f238505" Jan 21 15:58:02 crc kubenswrapper[4760]: I0121 15:58:02.134419 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dx99k_7300c51f-415f-4696-bda1-a9e79ae5704a/kube-multus/2.log" Jan 21 15:58:02 crc kubenswrapper[4760]: I0121 15:58:02.135653 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dx99k_7300c51f-415f-4696-bda1-a9e79ae5704a/kube-multus/1.log" Jan 21 15:58:02 crc kubenswrapper[4760]: I0121 15:58:02.135729 4760 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-multus/multus-dx99k" event={"ID":"7300c51f-415f-4696-bda1-a9e79ae5704a","Type":"ContainerStarted","Data":"58bf81df4cfb4f016c7bed7f93d70210183d53b5ef55b904d5a9ba76a306dfde"} Jan 21 15:58:04 crc kubenswrapper[4760]: I0121 15:58:04.073367 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-d7x4z" Jan 21 15:58:09 crc kubenswrapper[4760]: I0121 15:58:09.892819 4760 scope.go:117] "RemoveContainer" containerID="293f11d27cd6f37ed1446eb9d03303cd0d18c5e0c23fb8fce2818caaaab93cc5" Jan 21 15:58:12 crc kubenswrapper[4760]: I0121 15:58:12.198536 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dx99k_7300c51f-415f-4696-bda1-a9e79ae5704a/kube-multus/2.log" Jan 21 15:58:25 crc kubenswrapper[4760]: I0121 15:58:25.940167 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k"] Jan 21 15:58:25 crc kubenswrapper[4760]: I0121 15:58:25.945443 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k" Jan 21 15:58:25 crc kubenswrapper[4760]: I0121 15:58:25.971906 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 21 15:58:25 crc kubenswrapper[4760]: I0121 15:58:25.984569 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k"] Jan 21 15:58:26 crc kubenswrapper[4760]: I0121 15:58:26.073666 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k\" (UID: \"e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k" Jan 21 15:58:26 crc kubenswrapper[4760]: I0121 15:58:26.073761 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k\" (UID: \"e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k" Jan 21 15:58:26 crc kubenswrapper[4760]: I0121 15:58:26.073867 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmfb5\" (UniqueName: \"kubernetes.io/projected/e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3-kube-api-access-mmfb5\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k\" (UID: \"e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k" Jan 21 15:58:26 crc kubenswrapper[4760]: 
I0121 15:58:26.175876 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k\" (UID: \"e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k" Jan 21 15:58:26 crc kubenswrapper[4760]: I0121 15:58:26.176037 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmfb5\" (UniqueName: \"kubernetes.io/projected/e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3-kube-api-access-mmfb5\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k\" (UID: \"e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k" Jan 21 15:58:26 crc kubenswrapper[4760]: I0121 15:58:26.176103 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k\" (UID: \"e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k" Jan 21 15:58:26 crc kubenswrapper[4760]: I0121 15:58:26.176938 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k\" (UID: \"e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k" Jan 21 15:58:26 crc kubenswrapper[4760]: I0121 15:58:26.177176 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k\" (UID: \"e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k" Jan 21 15:58:26 crc kubenswrapper[4760]: I0121 15:58:26.196838 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmfb5\" (UniqueName: \"kubernetes.io/projected/e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3-kube-api-access-mmfb5\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k\" (UID: \"e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k" Jan 21 15:58:26 crc kubenswrapper[4760]: I0121 15:58:26.284882 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k" Jan 21 15:58:26 crc kubenswrapper[4760]: I0121 15:58:26.511359 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k"] Jan 21 15:58:27 crc kubenswrapper[4760]: I0121 15:58:27.296776 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k" event={"ID":"e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3","Type":"ContainerStarted","Data":"4b9cda5371335e69679caa83decd0e935f42c9e5bb6bf54084ef03da6e5f82dd"} Jan 21 15:58:27 crc kubenswrapper[4760]: I0121 15:58:27.297308 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k" event={"ID":"e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3","Type":"ContainerStarted","Data":"4f6553116c74b36ea527f69cf1b03f497b57f99d305cc09973e112afd012972e"} Jan 21 15:58:28 crc kubenswrapper[4760]: I0121 15:58:28.304721 4760 
generic.go:334] "Generic (PLEG): container finished" podID="e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3" containerID="4b9cda5371335e69679caa83decd0e935f42c9e5bb6bf54084ef03da6e5f82dd" exitCode=0 Jan 21 15:58:28 crc kubenswrapper[4760]: I0121 15:58:28.305476 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k" event={"ID":"e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3","Type":"ContainerDied","Data":"4b9cda5371335e69679caa83decd0e935f42c9e5bb6bf54084ef03da6e5f82dd"} Jan 21 15:58:33 crc kubenswrapper[4760]: I0121 15:58:33.336901 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k" event={"ID":"e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3","Type":"ContainerStarted","Data":"1fedfb0a812f08a58bb3dc8d5b38aadedca5e473481f409bee1673a5ca6503da"} Jan 21 15:58:34 crc kubenswrapper[4760]: I0121 15:58:34.347801 4760 generic.go:334] "Generic (PLEG): container finished" podID="e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3" containerID="1fedfb0a812f08a58bb3dc8d5b38aadedca5e473481f409bee1673a5ca6503da" exitCode=0 Jan 21 15:58:34 crc kubenswrapper[4760]: I0121 15:58:34.347947 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k" event={"ID":"e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3","Type":"ContainerDied","Data":"1fedfb0a812f08a58bb3dc8d5b38aadedca5e473481f409bee1673a5ca6503da"} Jan 21 15:58:35 crc kubenswrapper[4760]: I0121 15:58:35.359483 4760 generic.go:334] "Generic (PLEG): container finished" podID="e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3" containerID="22a51bc68126a6da5a729e533cc9a0c043dd9212c09cffe0d49330f855cf1977" exitCode=0 Jan 21 15:58:35 crc kubenswrapper[4760]: I0121 15:58:35.359594 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k" 
event={"ID":"e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3","Type":"ContainerDied","Data":"22a51bc68126a6da5a729e533cc9a0c043dd9212c09cffe0d49330f855cf1977"} Jan 21 15:58:36 crc kubenswrapper[4760]: I0121 15:58:36.608573 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k" Jan 21 15:58:36 crc kubenswrapper[4760]: I0121 15:58:36.741242 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3-bundle\") pod \"e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3\" (UID: \"e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3\") " Jan 21 15:58:36 crc kubenswrapper[4760]: I0121 15:58:36.741404 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmfb5\" (UniqueName: \"kubernetes.io/projected/e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3-kube-api-access-mmfb5\") pod \"e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3\" (UID: \"e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3\") " Jan 21 15:58:36 crc kubenswrapper[4760]: I0121 15:58:36.741953 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3-bundle" (OuterVolumeSpecName: "bundle") pod "e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3" (UID: "e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:58:36 crc kubenswrapper[4760]: I0121 15:58:36.742991 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3-util\") pod \"e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3\" (UID: \"e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3\") " Jan 21 15:58:36 crc kubenswrapper[4760]: I0121 15:58:36.743372 4760 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:58:36 crc kubenswrapper[4760]: I0121 15:58:36.748074 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3-kube-api-access-mmfb5" (OuterVolumeSpecName: "kube-api-access-mmfb5") pod "e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3" (UID: "e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3"). InnerVolumeSpecName "kube-api-access-mmfb5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:58:36 crc kubenswrapper[4760]: I0121 15:58:36.752619 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3-util" (OuterVolumeSpecName: "util") pod "e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3" (UID: "e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:58:36 crc kubenswrapper[4760]: I0121 15:58:36.844314 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmfb5\" (UniqueName: \"kubernetes.io/projected/e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3-kube-api-access-mmfb5\") on node \"crc\" DevicePath \"\"" Jan 21 15:58:36 crc kubenswrapper[4760]: I0121 15:58:36.844401 4760 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3-util\") on node \"crc\" DevicePath \"\"" Jan 21 15:58:37 crc kubenswrapper[4760]: I0121 15:58:37.374387 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k" event={"ID":"e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3","Type":"ContainerDied","Data":"4f6553116c74b36ea527f69cf1b03f497b57f99d305cc09973e112afd012972e"} Jan 21 15:58:37 crc kubenswrapper[4760]: I0121 15:58:37.374434 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k" Jan 21 15:58:37 crc kubenswrapper[4760]: I0121 15:58:37.374454 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f6553116c74b36ea527f69cf1b03f497b57f99d305cc09973e112afd012972e" Jan 21 15:58:37 crc kubenswrapper[4760]: E0121 15:58:37.448783 4760 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3f1ab22_a6bd_4a89_9b50_38d3e2dab1a3.slice\": RecentStats: unable to find data in memory cache]" Jan 21 15:58:42 crc kubenswrapper[4760]: I0121 15:58:42.547273 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-lrskp"] Jan 21 15:58:42 crc kubenswrapper[4760]: E0121 15:58:42.547796 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3" containerName="extract" Jan 21 15:58:42 crc kubenswrapper[4760]: I0121 15:58:42.547810 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3" containerName="extract" Jan 21 15:58:42 crc kubenswrapper[4760]: E0121 15:58:42.547823 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3" containerName="util" Jan 21 15:58:42 crc kubenswrapper[4760]: I0121 15:58:42.547829 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3" containerName="util" Jan 21 15:58:42 crc kubenswrapper[4760]: E0121 15:58:42.547848 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3" containerName="pull" Jan 21 15:58:42 crc kubenswrapper[4760]: I0121 15:58:42.547854 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3" containerName="pull" Jan 21 15:58:42 crc 
kubenswrapper[4760]: I0121 15:58:42.547944 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3" containerName="extract" Jan 21 15:58:42 crc kubenswrapper[4760]: I0121 15:58:42.548351 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-lrskp" Jan 21 15:58:42 crc kubenswrapper[4760]: I0121 15:58:42.550565 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 21 15:58:42 crc kubenswrapper[4760]: I0121 15:58:42.550621 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 21 15:58:42 crc kubenswrapper[4760]: I0121 15:58:42.551513 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-hxjkx" Jan 21 15:58:42 crc kubenswrapper[4760]: I0121 15:58:42.577134 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-lrskp"] Jan 21 15:58:42 crc kubenswrapper[4760]: I0121 15:58:42.628731 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5cwb\" (UniqueName: \"kubernetes.io/projected/f088d446-a779-4351-80aa-30d855335e4c-kube-api-access-q5cwb\") pod \"nmstate-operator-646758c888-lrskp\" (UID: \"f088d446-a779-4351-80aa-30d855335e4c\") " pod="openshift-nmstate/nmstate-operator-646758c888-lrskp" Jan 21 15:58:42 crc kubenswrapper[4760]: I0121 15:58:42.730262 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5cwb\" (UniqueName: \"kubernetes.io/projected/f088d446-a779-4351-80aa-30d855335e4c-kube-api-access-q5cwb\") pod \"nmstate-operator-646758c888-lrskp\" (UID: \"f088d446-a779-4351-80aa-30d855335e4c\") " pod="openshift-nmstate/nmstate-operator-646758c888-lrskp" Jan 21 15:58:42 crc kubenswrapper[4760]: I0121 
15:58:42.756972 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5cwb\" (UniqueName: \"kubernetes.io/projected/f088d446-a779-4351-80aa-30d855335e4c-kube-api-access-q5cwb\") pod \"nmstate-operator-646758c888-lrskp\" (UID: \"f088d446-a779-4351-80aa-30d855335e4c\") " pod="openshift-nmstate/nmstate-operator-646758c888-lrskp" Jan 21 15:58:42 crc kubenswrapper[4760]: I0121 15:58:42.866497 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-lrskp" Jan 21 15:58:43 crc kubenswrapper[4760]: I0121 15:58:43.106176 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-lrskp"] Jan 21 15:58:43 crc kubenswrapper[4760]: I0121 15:58:43.423526 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-lrskp" event={"ID":"f088d446-a779-4351-80aa-30d855335e4c","Type":"ContainerStarted","Data":"53e9daeb45454e5f70c7acded5a27a7a61f9b4c9cd0d287234fe736779a08ada"} Jan 21 15:58:45 crc kubenswrapper[4760]: I0121 15:58:45.450785 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-lrskp" event={"ID":"f088d446-a779-4351-80aa-30d855335e4c","Type":"ContainerStarted","Data":"0c0904d8486a936db2005a2a6b9c27a238d7dfbdf6884777d64d949722d69df2"} Jan 21 15:58:45 crc kubenswrapper[4760]: I0121 15:58:45.467963 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-lrskp" podStartSLOduration=1.498692463 podStartE2EDuration="3.467938488s" podCreationTimestamp="2026-01-21 15:58:42 +0000 UTC" firstStartedPulling="2026-01-21 15:58:43.120699175 +0000 UTC m=+693.788468753" lastFinishedPulling="2026-01-21 15:58:45.0899452 +0000 UTC m=+695.757714778" observedRunningTime="2026-01-21 15:58:45.466837241 +0000 UTC m=+696.134606829" watchObservedRunningTime="2026-01-21 
15:58:45.467938488 +0000 UTC m=+696.135708066" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.572165 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-k5n9g"] Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.574011 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-k5n9g" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.578571 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-gcx9t" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.584903 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-v2hbl"] Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.585846 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-v2hbl" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.588165 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.614637 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-5b9fb"] Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.616222 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-5b9fb" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.621359 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwrtk\" (UniqueName: \"kubernetes.io/projected/80bcb070-867d-4d94-9f7b-73ff6c767a78-kube-api-access-mwrtk\") pod \"nmstate-webhook-8474b5b9d8-v2hbl\" (UID: \"80bcb070-867d-4d94-9f7b-73ff6c767a78\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-v2hbl" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.621469 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/80bcb070-867d-4d94-9f7b-73ff6c767a78-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-v2hbl\" (UID: \"80bcb070-867d-4d94-9f7b-73ff6c767a78\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-v2hbl" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.621525 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2bzn\" (UniqueName: \"kubernetes.io/projected/497fc134-f9a5-47ff-80ba-2c702922274a-kube-api-access-v2bzn\") pod \"nmstate-metrics-54757c584b-k5n9g\" (UID: \"497fc134-f9a5-47ff-80ba-2c702922274a\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-k5n9g" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.651881 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-k5n9g"] Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.707494 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-v2hbl"] Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.724013 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/272d3255-cc65-43d6-89d6-37962ec071f1-ovs-socket\") pod 
\"nmstate-handler-5b9fb\" (UID: \"272d3255-cc65-43d6-89d6-37962ec071f1\") " pod="openshift-nmstate/nmstate-handler-5b9fb" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.724130 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmzlr\" (UniqueName: \"kubernetes.io/projected/272d3255-cc65-43d6-89d6-37962ec071f1-kube-api-access-pmzlr\") pod \"nmstate-handler-5b9fb\" (UID: \"272d3255-cc65-43d6-89d6-37962ec071f1\") " pod="openshift-nmstate/nmstate-handler-5b9fb" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.724178 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/80bcb070-867d-4d94-9f7b-73ff6c767a78-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-v2hbl\" (UID: \"80bcb070-867d-4d94-9f7b-73ff6c767a78\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-v2hbl" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.724213 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/272d3255-cc65-43d6-89d6-37962ec071f1-dbus-socket\") pod \"nmstate-handler-5b9fb\" (UID: \"272d3255-cc65-43d6-89d6-37962ec071f1\") " pod="openshift-nmstate/nmstate-handler-5b9fb" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.724241 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2bzn\" (UniqueName: \"kubernetes.io/projected/497fc134-f9a5-47ff-80ba-2c702922274a-kube-api-access-v2bzn\") pod \"nmstate-metrics-54757c584b-k5n9g\" (UID: \"497fc134-f9a5-47ff-80ba-2c702922274a\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-k5n9g" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.724406 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwrtk\" (UniqueName: 
\"kubernetes.io/projected/80bcb070-867d-4d94-9f7b-73ff6c767a78-kube-api-access-mwrtk\") pod \"nmstate-webhook-8474b5b9d8-v2hbl\" (UID: \"80bcb070-867d-4d94-9f7b-73ff6c767a78\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-v2hbl" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.724435 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/272d3255-cc65-43d6-89d6-37962ec071f1-nmstate-lock\") pod \"nmstate-handler-5b9fb\" (UID: \"272d3255-cc65-43d6-89d6-37962ec071f1\") " pod="openshift-nmstate/nmstate-handler-5b9fb" Jan 21 15:58:46 crc kubenswrapper[4760]: E0121 15:58:46.724972 4760 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 21 15:58:46 crc kubenswrapper[4760]: E0121 15:58:46.725062 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80bcb070-867d-4d94-9f7b-73ff6c767a78-tls-key-pair podName:80bcb070-867d-4d94-9f7b-73ff6c767a78 nodeName:}" failed. No retries permitted until 2026-01-21 15:58:47.225033823 +0000 UTC m=+697.892803401 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/80bcb070-867d-4d94-9f7b-73ff6c767a78-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-v2hbl" (UID: "80bcb070-867d-4d94-9f7b-73ff6c767a78") : secret "openshift-nmstate-webhook" not found Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.752870 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwrtk\" (UniqueName: \"kubernetes.io/projected/80bcb070-867d-4d94-9f7b-73ff6c767a78-kube-api-access-mwrtk\") pod \"nmstate-webhook-8474b5b9d8-v2hbl\" (UID: \"80bcb070-867d-4d94-9f7b-73ff6c767a78\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-v2hbl" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.754187 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2bzn\" (UniqueName: \"kubernetes.io/projected/497fc134-f9a5-47ff-80ba-2c702922274a-kube-api-access-v2bzn\") pod \"nmstate-metrics-54757c584b-k5n9g\" (UID: \"497fc134-f9a5-47ff-80ba-2c702922274a\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-k5n9g" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.778813 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-gwfqw"] Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.780105 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-gwfqw" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.785723 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.785941 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.786171 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-fmdtb" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.800670 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-gwfqw"] Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.826350 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/272d3255-cc65-43d6-89d6-37962ec071f1-nmstate-lock\") pod \"nmstate-handler-5b9fb\" (UID: \"272d3255-cc65-43d6-89d6-37962ec071f1\") " pod="openshift-nmstate/nmstate-handler-5b9fb" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.826462 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/272d3255-cc65-43d6-89d6-37962ec071f1-ovs-socket\") pod \"nmstate-handler-5b9fb\" (UID: \"272d3255-cc65-43d6-89d6-37962ec071f1\") " pod="openshift-nmstate/nmstate-handler-5b9fb" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.826527 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b83e6b43-dd2e-439e-afb2-e168dcd42605-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-gwfqw\" (UID: \"b83e6b43-dd2e-439e-afb2-e168dcd42605\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-gwfqw" Jan 21 
15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.826559 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqh5b\" (UniqueName: \"kubernetes.io/projected/b83e6b43-dd2e-439e-afb2-e168dcd42605-kube-api-access-pqh5b\") pod \"nmstate-console-plugin-7754f76f8b-gwfqw\" (UID: \"b83e6b43-dd2e-439e-afb2-e168dcd42605\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-gwfqw" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.826591 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/272d3255-cc65-43d6-89d6-37962ec071f1-nmstate-lock\") pod \"nmstate-handler-5b9fb\" (UID: \"272d3255-cc65-43d6-89d6-37962ec071f1\") " pod="openshift-nmstate/nmstate-handler-5b9fb" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.826609 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmzlr\" (UniqueName: \"kubernetes.io/projected/272d3255-cc65-43d6-89d6-37962ec071f1-kube-api-access-pmzlr\") pod \"nmstate-handler-5b9fb\" (UID: \"272d3255-cc65-43d6-89d6-37962ec071f1\") " pod="openshift-nmstate/nmstate-handler-5b9fb" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.826851 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b83e6b43-dd2e-439e-afb2-e168dcd42605-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-gwfqw\" (UID: \"b83e6b43-dd2e-439e-afb2-e168dcd42605\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-gwfqw" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.826941 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/272d3255-cc65-43d6-89d6-37962ec071f1-dbus-socket\") pod \"nmstate-handler-5b9fb\" (UID: \"272d3255-cc65-43d6-89d6-37962ec071f1\") " 
pod="openshift-nmstate/nmstate-handler-5b9fb" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.827103 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/272d3255-cc65-43d6-89d6-37962ec071f1-ovs-socket\") pod \"nmstate-handler-5b9fb\" (UID: \"272d3255-cc65-43d6-89d6-37962ec071f1\") " pod="openshift-nmstate/nmstate-handler-5b9fb" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.827628 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/272d3255-cc65-43d6-89d6-37962ec071f1-dbus-socket\") pod \"nmstate-handler-5b9fb\" (UID: \"272d3255-cc65-43d6-89d6-37962ec071f1\") " pod="openshift-nmstate/nmstate-handler-5b9fb" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.857008 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmzlr\" (UniqueName: \"kubernetes.io/projected/272d3255-cc65-43d6-89d6-37962ec071f1-kube-api-access-pmzlr\") pod \"nmstate-handler-5b9fb\" (UID: \"272d3255-cc65-43d6-89d6-37962ec071f1\") " pod="openshift-nmstate/nmstate-handler-5b9fb" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.890453 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-k5n9g" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.928471 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b83e6b43-dd2e-439e-afb2-e168dcd42605-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-gwfqw\" (UID: \"b83e6b43-dd2e-439e-afb2-e168dcd42605\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-gwfqw" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.928626 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b83e6b43-dd2e-439e-afb2-e168dcd42605-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-gwfqw\" (UID: \"b83e6b43-dd2e-439e-afb2-e168dcd42605\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-gwfqw" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.928658 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqh5b\" (UniqueName: \"kubernetes.io/projected/b83e6b43-dd2e-439e-afb2-e168dcd42605-kube-api-access-pqh5b\") pod \"nmstate-console-plugin-7754f76f8b-gwfqw\" (UID: \"b83e6b43-dd2e-439e-afb2-e168dcd42605\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-gwfqw" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.929718 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b83e6b43-dd2e-439e-afb2-e168dcd42605-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-gwfqw\" (UID: \"b83e6b43-dd2e-439e-afb2-e168dcd42605\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-gwfqw" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.933578 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-5b9fb" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.935580 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b83e6b43-dd2e-439e-afb2-e168dcd42605-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-gwfqw\" (UID: \"b83e6b43-dd2e-439e-afb2-e168dcd42605\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-gwfqw" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.956385 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqh5b\" (UniqueName: \"kubernetes.io/projected/b83e6b43-dd2e-439e-afb2-e168dcd42605-kube-api-access-pqh5b\") pod \"nmstate-console-plugin-7754f76f8b-gwfqw\" (UID: \"b83e6b43-dd2e-439e-afb2-e168dcd42605\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-gwfqw" Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.996472 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7dd4888dc-rtt7t"] Jan 21 15:58:46 crc kubenswrapper[4760]: W0121 15:58:46.996597 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod272d3255_cc65_43d6_89d6_37962ec071f1.slice/crio-b955b8dfa450b79f5cbff3baaacd96e681077abc918b47318e8ef628ba3b3b7d WatchSource:0}: Error finding container b955b8dfa450b79f5cbff3baaacd96e681077abc918b47318e8ef628ba3b3b7d: Status 404 returned error can't find the container with id b955b8dfa450b79f5cbff3baaacd96e681077abc918b47318e8ef628ba3b3b7d Jan 21 15:58:46 crc kubenswrapper[4760]: I0121 15:58:46.997441 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7dd4888dc-rtt7t" Jan 21 15:58:47 crc kubenswrapper[4760]: I0121 15:58:47.022395 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7dd4888dc-rtt7t"] Jan 21 15:58:47 crc kubenswrapper[4760]: I0121 15:58:47.118963 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-gwfqw" Jan 21 15:58:47 crc kubenswrapper[4760]: I0121 15:58:47.139630 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7edb4317-2fa7-48c9-ba5b-45fb8d9625c0-oauth-serving-cert\") pod \"console-7dd4888dc-rtt7t\" (UID: \"7edb4317-2fa7-48c9-ba5b-45fb8d9625c0\") " pod="openshift-console/console-7dd4888dc-rtt7t" Jan 21 15:58:47 crc kubenswrapper[4760]: I0121 15:58:47.139693 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7edb4317-2fa7-48c9-ba5b-45fb8d9625c0-trusted-ca-bundle\") pod \"console-7dd4888dc-rtt7t\" (UID: \"7edb4317-2fa7-48c9-ba5b-45fb8d9625c0\") " pod="openshift-console/console-7dd4888dc-rtt7t" Jan 21 15:58:47 crc kubenswrapper[4760]: I0121 15:58:47.139767 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7edb4317-2fa7-48c9-ba5b-45fb8d9625c0-service-ca\") pod \"console-7dd4888dc-rtt7t\" (UID: \"7edb4317-2fa7-48c9-ba5b-45fb8d9625c0\") " pod="openshift-console/console-7dd4888dc-rtt7t" Jan 21 15:58:47 crc kubenswrapper[4760]: I0121 15:58:47.139788 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7edb4317-2fa7-48c9-ba5b-45fb8d9625c0-console-oauth-config\") pod \"console-7dd4888dc-rtt7t\" (UID: 
\"7edb4317-2fa7-48c9-ba5b-45fb8d9625c0\") " pod="openshift-console/console-7dd4888dc-rtt7t" Jan 21 15:58:47 crc kubenswrapper[4760]: I0121 15:58:47.139810 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7edb4317-2fa7-48c9-ba5b-45fb8d9625c0-console-config\") pod \"console-7dd4888dc-rtt7t\" (UID: \"7edb4317-2fa7-48c9-ba5b-45fb8d9625c0\") " pod="openshift-console/console-7dd4888dc-rtt7t" Jan 21 15:58:47 crc kubenswrapper[4760]: I0121 15:58:47.139825 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwnxv\" (UniqueName: \"kubernetes.io/projected/7edb4317-2fa7-48c9-ba5b-45fb8d9625c0-kube-api-access-fwnxv\") pod \"console-7dd4888dc-rtt7t\" (UID: \"7edb4317-2fa7-48c9-ba5b-45fb8d9625c0\") " pod="openshift-console/console-7dd4888dc-rtt7t" Jan 21 15:58:47 crc kubenswrapper[4760]: I0121 15:58:47.139860 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7edb4317-2fa7-48c9-ba5b-45fb8d9625c0-console-serving-cert\") pod \"console-7dd4888dc-rtt7t\" (UID: \"7edb4317-2fa7-48c9-ba5b-45fb8d9625c0\") " pod="openshift-console/console-7dd4888dc-rtt7t" Jan 21 15:58:47 crc kubenswrapper[4760]: I0121 15:58:47.240812 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7edb4317-2fa7-48c9-ba5b-45fb8d9625c0-service-ca\") pod \"console-7dd4888dc-rtt7t\" (UID: \"7edb4317-2fa7-48c9-ba5b-45fb8d9625c0\") " pod="openshift-console/console-7dd4888dc-rtt7t" Jan 21 15:58:47 crc kubenswrapper[4760]: I0121 15:58:47.241153 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7edb4317-2fa7-48c9-ba5b-45fb8d9625c0-console-oauth-config\") pod 
\"console-7dd4888dc-rtt7t\" (UID: \"7edb4317-2fa7-48c9-ba5b-45fb8d9625c0\") " pod="openshift-console/console-7dd4888dc-rtt7t" Jan 21 15:58:47 crc kubenswrapper[4760]: I0121 15:58:47.241186 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7edb4317-2fa7-48c9-ba5b-45fb8d9625c0-console-config\") pod \"console-7dd4888dc-rtt7t\" (UID: \"7edb4317-2fa7-48c9-ba5b-45fb8d9625c0\") " pod="openshift-console/console-7dd4888dc-rtt7t" Jan 21 15:58:47 crc kubenswrapper[4760]: I0121 15:58:47.241202 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwnxv\" (UniqueName: \"kubernetes.io/projected/7edb4317-2fa7-48c9-ba5b-45fb8d9625c0-kube-api-access-fwnxv\") pod \"console-7dd4888dc-rtt7t\" (UID: \"7edb4317-2fa7-48c9-ba5b-45fb8d9625c0\") " pod="openshift-console/console-7dd4888dc-rtt7t" Jan 21 15:58:47 crc kubenswrapper[4760]: I0121 15:58:47.241261 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7edb4317-2fa7-48c9-ba5b-45fb8d9625c0-console-serving-cert\") pod \"console-7dd4888dc-rtt7t\" (UID: \"7edb4317-2fa7-48c9-ba5b-45fb8d9625c0\") " pod="openshift-console/console-7dd4888dc-rtt7t" Jan 21 15:58:47 crc kubenswrapper[4760]: I0121 15:58:47.241955 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7edb4317-2fa7-48c9-ba5b-45fb8d9625c0-service-ca\") pod \"console-7dd4888dc-rtt7t\" (UID: \"7edb4317-2fa7-48c9-ba5b-45fb8d9625c0\") " pod="openshift-console/console-7dd4888dc-rtt7t" Jan 21 15:58:47 crc kubenswrapper[4760]: I0121 15:58:47.242021 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7edb4317-2fa7-48c9-ba5b-45fb8d9625c0-console-config\") pod \"console-7dd4888dc-rtt7t\" (UID: 
\"7edb4317-2fa7-48c9-ba5b-45fb8d9625c0\") " pod="openshift-console/console-7dd4888dc-rtt7t" Jan 21 15:58:47 crc kubenswrapper[4760]: I0121 15:58:47.242032 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7edb4317-2fa7-48c9-ba5b-45fb8d9625c0-oauth-serving-cert\") pod \"console-7dd4888dc-rtt7t\" (UID: \"7edb4317-2fa7-48c9-ba5b-45fb8d9625c0\") " pod="openshift-console/console-7dd4888dc-rtt7t" Jan 21 15:58:47 crc kubenswrapper[4760]: I0121 15:58:47.242076 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7edb4317-2fa7-48c9-ba5b-45fb8d9625c0-trusted-ca-bundle\") pod \"console-7dd4888dc-rtt7t\" (UID: \"7edb4317-2fa7-48c9-ba5b-45fb8d9625c0\") " pod="openshift-console/console-7dd4888dc-rtt7t" Jan 21 15:58:47 crc kubenswrapper[4760]: I0121 15:58:47.242112 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/80bcb070-867d-4d94-9f7b-73ff6c767a78-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-v2hbl\" (UID: \"80bcb070-867d-4d94-9f7b-73ff6c767a78\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-v2hbl" Jan 21 15:58:47 crc kubenswrapper[4760]: I0121 15:58:47.242771 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7edb4317-2fa7-48c9-ba5b-45fb8d9625c0-oauth-serving-cert\") pod \"console-7dd4888dc-rtt7t\" (UID: \"7edb4317-2fa7-48c9-ba5b-45fb8d9625c0\") " pod="openshift-console/console-7dd4888dc-rtt7t" Jan 21 15:58:47 crc kubenswrapper[4760]: I0121 15:58:47.243916 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7edb4317-2fa7-48c9-ba5b-45fb8d9625c0-trusted-ca-bundle\") pod \"console-7dd4888dc-rtt7t\" (UID: \"7edb4317-2fa7-48c9-ba5b-45fb8d9625c0\") 
" pod="openshift-console/console-7dd4888dc-rtt7t" Jan 21 15:58:47 crc kubenswrapper[4760]: I0121 15:58:47.246768 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7edb4317-2fa7-48c9-ba5b-45fb8d9625c0-console-oauth-config\") pod \"console-7dd4888dc-rtt7t\" (UID: \"7edb4317-2fa7-48c9-ba5b-45fb8d9625c0\") " pod="openshift-console/console-7dd4888dc-rtt7t" Jan 21 15:58:47 crc kubenswrapper[4760]: I0121 15:58:47.247746 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7edb4317-2fa7-48c9-ba5b-45fb8d9625c0-console-serving-cert\") pod \"console-7dd4888dc-rtt7t\" (UID: \"7edb4317-2fa7-48c9-ba5b-45fb8d9625c0\") " pod="openshift-console/console-7dd4888dc-rtt7t" Jan 21 15:58:47 crc kubenswrapper[4760]: I0121 15:58:47.247775 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-k5n9g"] Jan 21 15:58:47 crc kubenswrapper[4760]: I0121 15:58:47.248826 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/80bcb070-867d-4d94-9f7b-73ff6c767a78-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-v2hbl\" (UID: \"80bcb070-867d-4d94-9f7b-73ff6c767a78\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-v2hbl" Jan 21 15:58:47 crc kubenswrapper[4760]: I0121 15:58:47.261538 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwnxv\" (UniqueName: \"kubernetes.io/projected/7edb4317-2fa7-48c9-ba5b-45fb8d9625c0-kube-api-access-fwnxv\") pod \"console-7dd4888dc-rtt7t\" (UID: \"7edb4317-2fa7-48c9-ba5b-45fb8d9625c0\") " pod="openshift-console/console-7dd4888dc-rtt7t" Jan 21 15:58:47 crc kubenswrapper[4760]: I0121 15:58:47.325213 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7dd4888dc-rtt7t" Jan 21 15:58:47 crc kubenswrapper[4760]: I0121 15:58:47.347191 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-gwfqw"] Jan 21 15:58:47 crc kubenswrapper[4760]: I0121 15:58:47.464961 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-5b9fb" event={"ID":"272d3255-cc65-43d6-89d6-37962ec071f1","Type":"ContainerStarted","Data":"b955b8dfa450b79f5cbff3baaacd96e681077abc918b47318e8ef628ba3b3b7d"} Jan 21 15:58:47 crc kubenswrapper[4760]: I0121 15:58:47.466351 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-k5n9g" event={"ID":"497fc134-f9a5-47ff-80ba-2c702922274a","Type":"ContainerStarted","Data":"7cdb5bfdb40cf1fbe89b036ead413b2ae4c6797ae510902e99f9a31d115ffa94"} Jan 21 15:58:47 crc kubenswrapper[4760]: I0121 15:58:47.467206 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-gwfqw" event={"ID":"b83e6b43-dd2e-439e-afb2-e168dcd42605","Type":"ContainerStarted","Data":"042807e2742ede90efc6d7845d7b3968a819f5b7fc4cbfff24f4b9b312b2ab2f"} Jan 21 15:58:47 crc kubenswrapper[4760]: I0121 15:58:47.501043 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-v2hbl" Jan 21 15:58:47 crc kubenswrapper[4760]: I0121 15:58:47.560275 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7dd4888dc-rtt7t"] Jan 21 15:58:47 crc kubenswrapper[4760]: W0121 15:58:47.575850 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7edb4317_2fa7_48c9_ba5b_45fb8d9625c0.slice/crio-10423ec9e8020b400abc0d9b2e7073d1b548f05f80496a909ef88515d3e29de8 WatchSource:0}: Error finding container 10423ec9e8020b400abc0d9b2e7073d1b548f05f80496a909ef88515d3e29de8: Status 404 returned error can't find the container with id 10423ec9e8020b400abc0d9b2e7073d1b548f05f80496a909ef88515d3e29de8 Jan 21 15:58:47 crc kubenswrapper[4760]: I0121 15:58:47.724107 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-v2hbl"] Jan 21 15:58:47 crc kubenswrapper[4760]: W0121 15:58:47.733490 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80bcb070_867d_4d94_9f7b_73ff6c767a78.slice/crio-6968d2e9df93b27697ac06a052c36387ce2bab11e8d5742c5ab05f07f10a0277 WatchSource:0}: Error finding container 6968d2e9df93b27697ac06a052c36387ce2bab11e8d5742c5ab05f07f10a0277: Status 404 returned error can't find the container with id 6968d2e9df93b27697ac06a052c36387ce2bab11e8d5742c5ab05f07f10a0277 Jan 21 15:58:48 crc kubenswrapper[4760]: I0121 15:58:48.476505 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7dd4888dc-rtt7t" event={"ID":"7edb4317-2fa7-48c9-ba5b-45fb8d9625c0","Type":"ContainerStarted","Data":"50c6cdefc57194adf7a01d2899d379080ff540d9a24190be2a8fb62681827ba7"} Jan 21 15:58:48 crc kubenswrapper[4760]: I0121 15:58:48.476594 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7dd4888dc-rtt7t" 
event={"ID":"7edb4317-2fa7-48c9-ba5b-45fb8d9625c0","Type":"ContainerStarted","Data":"10423ec9e8020b400abc0d9b2e7073d1b548f05f80496a909ef88515d3e29de8"} Jan 21 15:58:48 crc kubenswrapper[4760]: I0121 15:58:48.480487 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-v2hbl" event={"ID":"80bcb070-867d-4d94-9f7b-73ff6c767a78","Type":"ContainerStarted","Data":"6968d2e9df93b27697ac06a052c36387ce2bab11e8d5742c5ab05f07f10a0277"} Jan 21 15:58:48 crc kubenswrapper[4760]: I0121 15:58:48.501684 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7dd4888dc-rtt7t" podStartSLOduration=2.5016496200000002 podStartE2EDuration="2.50164962s" podCreationTimestamp="2026-01-21 15:58:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 15:58:48.497305673 +0000 UTC m=+699.165075271" watchObservedRunningTime="2026-01-21 15:58:48.50164962 +0000 UTC m=+699.169419198" Jan 21 15:58:56 crc kubenswrapper[4760]: I0121 15:58:56.580108 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-5b9fb" event={"ID":"272d3255-cc65-43d6-89d6-37962ec071f1","Type":"ContainerStarted","Data":"cd6c92454226f2ebcbf18f6499e8016020b64d05c633f37c6689a8c01844e8ae"} Jan 21 15:58:56 crc kubenswrapper[4760]: I0121 15:58:56.581073 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-5b9fb" Jan 21 15:58:56 crc kubenswrapper[4760]: I0121 15:58:56.584764 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-v2hbl" event={"ID":"80bcb070-867d-4d94-9f7b-73ff6c767a78","Type":"ContainerStarted","Data":"e5ad803d3e1d759fe77deeac9cb8cd139e781303d1f1b1732c19927c2eec042f"} Jan 21 15:58:56 crc kubenswrapper[4760]: I0121 15:58:56.584973 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-v2hbl" Jan 21 15:58:56 crc kubenswrapper[4760]: I0121 15:58:56.594593 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-k5n9g" event={"ID":"497fc134-f9a5-47ff-80ba-2c702922274a","Type":"ContainerStarted","Data":"652c5e0e09abbcaad491091420577626db7880500dc9d0fbc4a55a8035fe3524"} Jan 21 15:58:56 crc kubenswrapper[4760]: I0121 15:58:56.599751 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-gwfqw" event={"ID":"b83e6b43-dd2e-439e-afb2-e168dcd42605","Type":"ContainerStarted","Data":"647e8039efc7a55d31bb5eb7847aa13371eed420ac665abb1b437dc08fd77e47"} Jan 21 15:58:56 crc kubenswrapper[4760]: I0121 15:58:56.604699 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-5b9fb" podStartSLOduration=1.553327914 podStartE2EDuration="10.604674937s" podCreationTimestamp="2026-01-21 15:58:46 +0000 UTC" firstStartedPulling="2026-01-21 15:58:47.009151947 +0000 UTC m=+697.676921525" lastFinishedPulling="2026-01-21 15:58:56.06049897 +0000 UTC m=+706.728268548" observedRunningTime="2026-01-21 15:58:56.597802679 +0000 UTC m=+707.265572277" watchObservedRunningTime="2026-01-21 15:58:56.604674937 +0000 UTC m=+707.272444515" Jan 21 15:58:56 crc kubenswrapper[4760]: I0121 15:58:56.652907 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-v2hbl" podStartSLOduration=2.334087207 podStartE2EDuration="10.65285585s" podCreationTimestamp="2026-01-21 15:58:46 +0000 UTC" firstStartedPulling="2026-01-21 15:58:47.735598307 +0000 UTC m=+698.403367885" lastFinishedPulling="2026-01-21 15:58:56.05436695 +0000 UTC m=+706.722136528" observedRunningTime="2026-01-21 15:58:56.621769277 +0000 UTC m=+707.289538855" watchObservedRunningTime="2026-01-21 15:58:56.65285585 +0000 UTC m=+707.320625448" Jan 21 15:58:56 
crc kubenswrapper[4760]: I0121 15:58:56.657933 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-gwfqw" podStartSLOduration=1.94798785 podStartE2EDuration="10.657915504s" podCreationTimestamp="2026-01-21 15:58:46 +0000 UTC" firstStartedPulling="2026-01-21 15:58:47.350289929 +0000 UTC m=+698.018059517" lastFinishedPulling="2026-01-21 15:58:56.060217603 +0000 UTC m=+706.727987171" observedRunningTime="2026-01-21 15:58:56.654108381 +0000 UTC m=+707.321877959" watchObservedRunningTime="2026-01-21 15:58:56.657915504 +0000 UTC m=+707.325685092" Jan 21 15:58:57 crc kubenswrapper[4760]: I0121 15:58:57.326008 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7dd4888dc-rtt7t" Jan 21 15:58:57 crc kubenswrapper[4760]: I0121 15:58:57.326074 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7dd4888dc-rtt7t" Jan 21 15:58:57 crc kubenswrapper[4760]: I0121 15:58:57.331887 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7dd4888dc-rtt7t" Jan 21 15:58:57 crc kubenswrapper[4760]: I0121 15:58:57.615173 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7dd4888dc-rtt7t" Jan 21 15:58:57 crc kubenswrapper[4760]: I0121 15:58:57.675057 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-clnlg"] Jan 21 15:58:59 crc kubenswrapper[4760]: I0121 15:58:59.631931 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-k5n9g" event={"ID":"497fc134-f9a5-47ff-80ba-2c702922274a","Type":"ContainerStarted","Data":"84dc8caa0040d3a55de0251b578d03de0c048075476573e2fe847c1310ef2572"} Jan 21 15:58:59 crc kubenswrapper[4760]: I0121 15:58:59.647537 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-nmstate/nmstate-metrics-54757c584b-k5n9g" podStartSLOduration=2.060913432 podStartE2EDuration="13.647515093s" podCreationTimestamp="2026-01-21 15:58:46 +0000 UTC" firstStartedPulling="2026-01-21 15:58:47.261119911 +0000 UTC m=+697.928889489" lastFinishedPulling="2026-01-21 15:58:58.847721572 +0000 UTC m=+709.515491150" observedRunningTime="2026-01-21 15:58:59.644209962 +0000 UTC m=+710.311979540" watchObservedRunningTime="2026-01-21 15:58:59.647515093 +0000 UTC m=+710.315284671" Jan 21 15:59:01 crc kubenswrapper[4760]: I0121 15:59:01.967151 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-5b9fb" Jan 21 15:59:07 crc kubenswrapper[4760]: I0121 15:59:07.507626 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-v2hbl" Jan 21 15:59:20 crc kubenswrapper[4760]: I0121 15:59:20.477801 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl"] Jan 21 15:59:20 crc kubenswrapper[4760]: I0121 15:59:20.480289 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl" Jan 21 15:59:20 crc kubenswrapper[4760]: I0121 15:59:20.483130 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 21 15:59:20 crc kubenswrapper[4760]: I0121 15:59:20.497277 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl"] Jan 21 15:59:20 crc kubenswrapper[4760]: I0121 15:59:20.579664 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/11e5baeb-8bc7-4f75-bfcf-5128246fe0af-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl\" (UID: \"11e5baeb-8bc7-4f75-bfcf-5128246fe0af\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl" Jan 21 15:59:20 crc kubenswrapper[4760]: I0121 15:59:20.580029 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfd7m\" (UniqueName: \"kubernetes.io/projected/11e5baeb-8bc7-4f75-bfcf-5128246fe0af-kube-api-access-xfd7m\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl\" (UID: \"11e5baeb-8bc7-4f75-bfcf-5128246fe0af\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl" Jan 21 15:59:20 crc kubenswrapper[4760]: I0121 15:59:20.580440 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/11e5baeb-8bc7-4f75-bfcf-5128246fe0af-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl\" (UID: \"11e5baeb-8bc7-4f75-bfcf-5128246fe0af\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl" Jan 21 15:59:20 crc kubenswrapper[4760]: 
I0121 15:59:20.682200 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/11e5baeb-8bc7-4f75-bfcf-5128246fe0af-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl\" (UID: \"11e5baeb-8bc7-4f75-bfcf-5128246fe0af\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl" Jan 21 15:59:20 crc kubenswrapper[4760]: I0121 15:59:20.682296 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfd7m\" (UniqueName: \"kubernetes.io/projected/11e5baeb-8bc7-4f75-bfcf-5128246fe0af-kube-api-access-xfd7m\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl\" (UID: \"11e5baeb-8bc7-4f75-bfcf-5128246fe0af\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl" Jan 21 15:59:20 crc kubenswrapper[4760]: I0121 15:59:20.682478 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/11e5baeb-8bc7-4f75-bfcf-5128246fe0af-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl\" (UID: \"11e5baeb-8bc7-4f75-bfcf-5128246fe0af\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl" Jan 21 15:59:20 crc kubenswrapper[4760]: I0121 15:59:20.683638 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/11e5baeb-8bc7-4f75-bfcf-5128246fe0af-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl\" (UID: \"11e5baeb-8bc7-4f75-bfcf-5128246fe0af\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl" Jan 21 15:59:20 crc kubenswrapper[4760]: I0121 15:59:20.683683 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/11e5baeb-8bc7-4f75-bfcf-5128246fe0af-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl\" (UID: \"11e5baeb-8bc7-4f75-bfcf-5128246fe0af\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl" Jan 21 15:59:20 crc kubenswrapper[4760]: I0121 15:59:20.706433 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfd7m\" (UniqueName: \"kubernetes.io/projected/11e5baeb-8bc7-4f75-bfcf-5128246fe0af-kube-api-access-xfd7m\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl\" (UID: \"11e5baeb-8bc7-4f75-bfcf-5128246fe0af\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl" Jan 21 15:59:20 crc kubenswrapper[4760]: I0121 15:59:20.800392 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl" Jan 21 15:59:21 crc kubenswrapper[4760]: I0121 15:59:21.079543 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl"] Jan 21 15:59:21 crc kubenswrapper[4760]: W0121 15:59:21.086069 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11e5baeb_8bc7_4f75_bfcf_5128246fe0af.slice/crio-a31f64809458a75bad5df8458fd9e059e3216239749c999123cb988bff8531ab WatchSource:0}: Error finding container a31f64809458a75bad5df8458fd9e059e3216239749c999123cb988bff8531ab: Status 404 returned error can't find the container with id a31f64809458a75bad5df8458fd9e059e3216239749c999123cb988bff8531ab Jan 21 15:59:21 crc kubenswrapper[4760]: I0121 15:59:21.764601 4760 generic.go:334] "Generic (PLEG): container finished" podID="11e5baeb-8bc7-4f75-bfcf-5128246fe0af" containerID="bb7e7014a435de7a09f177cebaf4dad30d043efe7a7bef88be79b6b27b72c5e7" exitCode=0 
Jan 21 15:59:21 crc kubenswrapper[4760]: I0121 15:59:21.764662 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl" event={"ID":"11e5baeb-8bc7-4f75-bfcf-5128246fe0af","Type":"ContainerDied","Data":"bb7e7014a435de7a09f177cebaf4dad30d043efe7a7bef88be79b6b27b72c5e7"} Jan 21 15:59:21 crc kubenswrapper[4760]: I0121 15:59:21.764927 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl" event={"ID":"11e5baeb-8bc7-4f75-bfcf-5128246fe0af","Type":"ContainerStarted","Data":"a31f64809458a75bad5df8458fd9e059e3216239749c999123cb988bff8531ab"} Jan 21 15:59:22 crc kubenswrapper[4760]: I0121 15:59:22.728519 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-clnlg" podUID="dca5ed86-6716-40a8-a0d9-b403b3d3edd2" containerName="console" containerID="cri-o://962db233844f01b29292400cdf37469718690a355d43834f3bbbce67ad03b938" gracePeriod=15 Jan 21 15:59:23 crc kubenswrapper[4760]: I0121 15:59:23.110065 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-clnlg_dca5ed86-6716-40a8-a0d9-b403b3d3edd2/console/0.log" Jan 21 15:59:23 crc kubenswrapper[4760]: I0121 15:59:23.110353 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-clnlg" Jan 21 15:59:23 crc kubenswrapper[4760]: I0121 15:59:23.221308 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-console-config\") pod \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\" (UID: \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\") " Jan 21 15:59:23 crc kubenswrapper[4760]: I0121 15:59:23.221860 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-service-ca\") pod \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\" (UID: \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\") " Jan 21 15:59:23 crc kubenswrapper[4760]: I0121 15:59:23.221930 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-console-serving-cert\") pod \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\" (UID: \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\") " Jan 21 15:59:23 crc kubenswrapper[4760]: I0121 15:59:23.222012 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-oauth-serving-cert\") pod \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\" (UID: \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\") " Jan 21 15:59:23 crc kubenswrapper[4760]: I0121 15:59:23.222122 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgh59\" (UniqueName: \"kubernetes.io/projected/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-kube-api-access-zgh59\") pod \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\" (UID: \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\") " Jan 21 15:59:23 crc kubenswrapper[4760]: I0121 15:59:23.222191 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-trusted-ca-bundle\") pod \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\" (UID: \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\") " Jan 21 15:59:23 crc kubenswrapper[4760]: I0121 15:59:23.222227 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-console-oauth-config\") pod \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\" (UID: \"dca5ed86-6716-40a8-a0d9-b403b3d3edd2\") " Jan 21 15:59:23 crc kubenswrapper[4760]: I0121 15:59:23.222605 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-service-ca" (OuterVolumeSpecName: "service-ca") pod "dca5ed86-6716-40a8-a0d9-b403b3d3edd2" (UID: "dca5ed86-6716-40a8-a0d9-b403b3d3edd2"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:59:23 crc kubenswrapper[4760]: I0121 15:59:23.222807 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "dca5ed86-6716-40a8-a0d9-b403b3d3edd2" (UID: "dca5ed86-6716-40a8-a0d9-b403b3d3edd2"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:59:23 crc kubenswrapper[4760]: I0121 15:59:23.222980 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "dca5ed86-6716-40a8-a0d9-b403b3d3edd2" (UID: "dca5ed86-6716-40a8-a0d9-b403b3d3edd2"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:59:23 crc kubenswrapper[4760]: I0121 15:59:23.223152 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-console-config" (OuterVolumeSpecName: "console-config") pod "dca5ed86-6716-40a8-a0d9-b403b3d3edd2" (UID: "dca5ed86-6716-40a8-a0d9-b403b3d3edd2"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 15:59:23 crc kubenswrapper[4760]: I0121 15:59:23.229888 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-kube-api-access-zgh59" (OuterVolumeSpecName: "kube-api-access-zgh59") pod "dca5ed86-6716-40a8-a0d9-b403b3d3edd2" (UID: "dca5ed86-6716-40a8-a0d9-b403b3d3edd2"). InnerVolumeSpecName "kube-api-access-zgh59". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:59:23 crc kubenswrapper[4760]: I0121 15:59:23.233087 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "dca5ed86-6716-40a8-a0d9-b403b3d3edd2" (UID: "dca5ed86-6716-40a8-a0d9-b403b3d3edd2"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:59:23 crc kubenswrapper[4760]: I0121 15:59:23.234741 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "dca5ed86-6716-40a8-a0d9-b403b3d3edd2" (UID: "dca5ed86-6716-40a8-a0d9-b403b3d3edd2"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 15:59:23 crc kubenswrapper[4760]: I0121 15:59:23.323839 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgh59\" (UniqueName: \"kubernetes.io/projected/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-kube-api-access-zgh59\") on node \"crc\" DevicePath \"\"" Jan 21 15:59:23 crc kubenswrapper[4760]: I0121 15:59:23.323890 4760 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:59:23 crc kubenswrapper[4760]: I0121 15:59:23.323904 4760 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:59:23 crc kubenswrapper[4760]: I0121 15:59:23.323917 4760 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-console-config\") on node \"crc\" DevicePath \"\"" Jan 21 15:59:23 crc kubenswrapper[4760]: I0121 15:59:23.323931 4760 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 15:59:23 crc kubenswrapper[4760]: I0121 15:59:23.323947 4760 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:59:23 crc kubenswrapper[4760]: I0121 15:59:23.323959 4760 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dca5ed86-6716-40a8-a0d9-b403b3d3edd2-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 15:59:23 crc 
kubenswrapper[4760]: I0121 15:59:23.779719 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-clnlg_dca5ed86-6716-40a8-a0d9-b403b3d3edd2/console/0.log" Jan 21 15:59:23 crc kubenswrapper[4760]: I0121 15:59:23.779788 4760 generic.go:334] "Generic (PLEG): container finished" podID="dca5ed86-6716-40a8-a0d9-b403b3d3edd2" containerID="962db233844f01b29292400cdf37469718690a355d43834f3bbbce67ad03b938" exitCode=2 Jan 21 15:59:23 crc kubenswrapper[4760]: I0121 15:59:23.779952 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-clnlg" Jan 21 15:59:23 crc kubenswrapper[4760]: I0121 15:59:23.780442 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-clnlg" event={"ID":"dca5ed86-6716-40a8-a0d9-b403b3d3edd2","Type":"ContainerDied","Data":"962db233844f01b29292400cdf37469718690a355d43834f3bbbce67ad03b938"} Jan 21 15:59:23 crc kubenswrapper[4760]: I0121 15:59:23.780637 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-clnlg" event={"ID":"dca5ed86-6716-40a8-a0d9-b403b3d3edd2","Type":"ContainerDied","Data":"0281b55255f4efb1b0f1c85ffa5cb54711c643739e6e91bd25c713e06089b8a2"} Jan 21 15:59:23 crc kubenswrapper[4760]: I0121 15:59:23.780724 4760 scope.go:117] "RemoveContainer" containerID="962db233844f01b29292400cdf37469718690a355d43834f3bbbce67ad03b938" Jan 21 15:59:23 crc kubenswrapper[4760]: I0121 15:59:23.783008 4760 generic.go:334] "Generic (PLEG): container finished" podID="11e5baeb-8bc7-4f75-bfcf-5128246fe0af" containerID="6a447ccd9506c95e3fcdd39cf43456f281f36774e2234585eefcb088d64d1e6d" exitCode=0 Jan 21 15:59:23 crc kubenswrapper[4760]: I0121 15:59:23.783038 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl" 
event={"ID":"11e5baeb-8bc7-4f75-bfcf-5128246fe0af","Type":"ContainerDied","Data":"6a447ccd9506c95e3fcdd39cf43456f281f36774e2234585eefcb088d64d1e6d"} Jan 21 15:59:23 crc kubenswrapper[4760]: I0121 15:59:23.802958 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-clnlg"] Jan 21 15:59:23 crc kubenswrapper[4760]: I0121 15:59:23.808490 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-clnlg"] Jan 21 15:59:23 crc kubenswrapper[4760]: I0121 15:59:23.825297 4760 scope.go:117] "RemoveContainer" containerID="962db233844f01b29292400cdf37469718690a355d43834f3bbbce67ad03b938" Jan 21 15:59:23 crc kubenswrapper[4760]: E0121 15:59:23.825879 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"962db233844f01b29292400cdf37469718690a355d43834f3bbbce67ad03b938\": container with ID starting with 962db233844f01b29292400cdf37469718690a355d43834f3bbbce67ad03b938 not found: ID does not exist" containerID="962db233844f01b29292400cdf37469718690a355d43834f3bbbce67ad03b938" Jan 21 15:59:23 crc kubenswrapper[4760]: I0121 15:59:23.825918 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"962db233844f01b29292400cdf37469718690a355d43834f3bbbce67ad03b938"} err="failed to get container status \"962db233844f01b29292400cdf37469718690a355d43834f3bbbce67ad03b938\": rpc error: code = NotFound desc = could not find container \"962db233844f01b29292400cdf37469718690a355d43834f3bbbce67ad03b938\": container with ID starting with 962db233844f01b29292400cdf37469718690a355d43834f3bbbce67ad03b938 not found: ID does not exist" Jan 21 15:59:25 crc kubenswrapper[4760]: I0121 15:59:25.633623 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dca5ed86-6716-40a8-a0d9-b403b3d3edd2" path="/var/lib/kubelet/pods/dca5ed86-6716-40a8-a0d9-b403b3d3edd2/volumes" Jan 21 15:59:25 crc 
kubenswrapper[4760]: I0121 15:59:25.799288 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl" event={"ID":"11e5baeb-8bc7-4f75-bfcf-5128246fe0af","Type":"ContainerStarted","Data":"5b2392c73859f6a0516fbcbb30e443cfd1b5a07ec264d6cad8095e7372f28656"} Jan 21 15:59:25 crc kubenswrapper[4760]: I0121 15:59:25.821582 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl" podStartSLOduration=4.906154429 podStartE2EDuration="5.821555844s" podCreationTimestamp="2026-01-21 15:59:20 +0000 UTC" firstStartedPulling="2026-01-21 15:59:21.766526739 +0000 UTC m=+732.434296317" lastFinishedPulling="2026-01-21 15:59:22.681928154 +0000 UTC m=+733.349697732" observedRunningTime="2026-01-21 15:59:25.819456405 +0000 UTC m=+736.487225983" watchObservedRunningTime="2026-01-21 15:59:25.821555844 +0000 UTC m=+736.489325432" Jan 21 15:59:26 crc kubenswrapper[4760]: I0121 15:59:26.811812 4760 generic.go:334] "Generic (PLEG): container finished" podID="11e5baeb-8bc7-4f75-bfcf-5128246fe0af" containerID="5b2392c73859f6a0516fbcbb30e443cfd1b5a07ec264d6cad8095e7372f28656" exitCode=0 Jan 21 15:59:26 crc kubenswrapper[4760]: I0121 15:59:26.811911 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl" event={"ID":"11e5baeb-8bc7-4f75-bfcf-5128246fe0af","Type":"ContainerDied","Data":"5b2392c73859f6a0516fbcbb30e443cfd1b5a07ec264d6cad8095e7372f28656"} Jan 21 15:59:28 crc kubenswrapper[4760]: I0121 15:59:28.065174 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl" Jan 21 15:59:28 crc kubenswrapper[4760]: I0121 15:59:28.092865 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/11e5baeb-8bc7-4f75-bfcf-5128246fe0af-util\") pod \"11e5baeb-8bc7-4f75-bfcf-5128246fe0af\" (UID: \"11e5baeb-8bc7-4f75-bfcf-5128246fe0af\") " Jan 21 15:59:28 crc kubenswrapper[4760]: I0121 15:59:28.093040 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/11e5baeb-8bc7-4f75-bfcf-5128246fe0af-bundle\") pod \"11e5baeb-8bc7-4f75-bfcf-5128246fe0af\" (UID: \"11e5baeb-8bc7-4f75-bfcf-5128246fe0af\") " Jan 21 15:59:28 crc kubenswrapper[4760]: I0121 15:59:28.093083 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfd7m\" (UniqueName: \"kubernetes.io/projected/11e5baeb-8bc7-4f75-bfcf-5128246fe0af-kube-api-access-xfd7m\") pod \"11e5baeb-8bc7-4f75-bfcf-5128246fe0af\" (UID: \"11e5baeb-8bc7-4f75-bfcf-5128246fe0af\") " Jan 21 15:59:28 crc kubenswrapper[4760]: I0121 15:59:28.095365 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11e5baeb-8bc7-4f75-bfcf-5128246fe0af-bundle" (OuterVolumeSpecName: "bundle") pod "11e5baeb-8bc7-4f75-bfcf-5128246fe0af" (UID: "11e5baeb-8bc7-4f75-bfcf-5128246fe0af"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:59:28 crc kubenswrapper[4760]: I0121 15:59:28.099888 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11e5baeb-8bc7-4f75-bfcf-5128246fe0af-kube-api-access-xfd7m" (OuterVolumeSpecName: "kube-api-access-xfd7m") pod "11e5baeb-8bc7-4f75-bfcf-5128246fe0af" (UID: "11e5baeb-8bc7-4f75-bfcf-5128246fe0af"). InnerVolumeSpecName "kube-api-access-xfd7m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 15:59:28 crc kubenswrapper[4760]: I0121 15:59:28.102487 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11e5baeb-8bc7-4f75-bfcf-5128246fe0af-util" (OuterVolumeSpecName: "util") pod "11e5baeb-8bc7-4f75-bfcf-5128246fe0af" (UID: "11e5baeb-8bc7-4f75-bfcf-5128246fe0af"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 15:59:28 crc kubenswrapper[4760]: I0121 15:59:28.195570 4760 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/11e5baeb-8bc7-4f75-bfcf-5128246fe0af-util\") on node \"crc\" DevicePath \"\"" Jan 21 15:59:28 crc kubenswrapper[4760]: I0121 15:59:28.195749 4760 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/11e5baeb-8bc7-4f75-bfcf-5128246fe0af-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 15:59:28 crc kubenswrapper[4760]: I0121 15:59:28.195772 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfd7m\" (UniqueName: \"kubernetes.io/projected/11e5baeb-8bc7-4f75-bfcf-5128246fe0af-kube-api-access-xfd7m\") on node \"crc\" DevicePath \"\"" Jan 21 15:59:28 crc kubenswrapper[4760]: I0121 15:59:28.831168 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl" event={"ID":"11e5baeb-8bc7-4f75-bfcf-5128246fe0af","Type":"ContainerDied","Data":"a31f64809458a75bad5df8458fd9e059e3216239749c999123cb988bff8531ab"} Jan 21 15:59:28 crc kubenswrapper[4760]: I0121 15:59:28.831228 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a31f64809458a75bad5df8458fd9e059e3216239749c999123cb988bff8531ab" Jan 21 15:59:28 crc kubenswrapper[4760]: I0121 15:59:28.831482 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl" Jan 21 15:59:38 crc kubenswrapper[4760]: I0121 15:59:38.567652 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6c4667f969-l2pv4"] Jan 21 15:59:38 crc kubenswrapper[4760]: E0121 15:59:38.568446 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11e5baeb-8bc7-4f75-bfcf-5128246fe0af" containerName="extract" Jan 21 15:59:38 crc kubenswrapper[4760]: I0121 15:59:38.568462 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="11e5baeb-8bc7-4f75-bfcf-5128246fe0af" containerName="extract" Jan 21 15:59:38 crc kubenswrapper[4760]: E0121 15:59:38.568475 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11e5baeb-8bc7-4f75-bfcf-5128246fe0af" containerName="util" Jan 21 15:59:38 crc kubenswrapper[4760]: I0121 15:59:38.568482 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="11e5baeb-8bc7-4f75-bfcf-5128246fe0af" containerName="util" Jan 21 15:59:38 crc kubenswrapper[4760]: E0121 15:59:38.568492 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dca5ed86-6716-40a8-a0d9-b403b3d3edd2" containerName="console" Jan 21 15:59:38 crc kubenswrapper[4760]: I0121 15:59:38.568500 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="dca5ed86-6716-40a8-a0d9-b403b3d3edd2" containerName="console" Jan 21 15:59:38 crc kubenswrapper[4760]: E0121 15:59:38.568522 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11e5baeb-8bc7-4f75-bfcf-5128246fe0af" containerName="pull" Jan 21 15:59:38 crc kubenswrapper[4760]: I0121 15:59:38.568527 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="11e5baeb-8bc7-4f75-bfcf-5128246fe0af" containerName="pull" Jan 21 15:59:38 crc kubenswrapper[4760]: I0121 15:59:38.568632 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="11e5baeb-8bc7-4f75-bfcf-5128246fe0af" 
containerName="extract" Jan 21 15:59:38 crc kubenswrapper[4760]: I0121 15:59:38.568640 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="dca5ed86-6716-40a8-a0d9-b403b3d3edd2" containerName="console" Jan 21 15:59:38 crc kubenswrapper[4760]: I0121 15:59:38.569075 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6c4667f969-l2pv4" Jan 21 15:59:38 crc kubenswrapper[4760]: I0121 15:59:38.571463 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 21 15:59:38 crc kubenswrapper[4760]: I0121 15:59:38.571562 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 21 15:59:38 crc kubenswrapper[4760]: I0121 15:59:38.572019 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-cdxqq" Jan 21 15:59:38 crc kubenswrapper[4760]: I0121 15:59:38.573551 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 21 15:59:38 crc kubenswrapper[4760]: I0121 15:59:38.576896 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 21 15:59:38 crc kubenswrapper[4760]: I0121 15:59:38.586992 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6c4667f969-l2pv4"] Jan 21 15:59:38 crc kubenswrapper[4760]: I0121 15:59:38.762609 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/18110c9f-5a23-4a4c-9b39-289c23ff6e1c-apiservice-cert\") pod \"metallb-operator-controller-manager-6c4667f969-l2pv4\" (UID: \"18110c9f-5a23-4a4c-9b39-289c23ff6e1c\") " pod="metallb-system/metallb-operator-controller-manager-6c4667f969-l2pv4" Jan 
21 15:59:38 crc kubenswrapper[4760]: I0121 15:59:38.762691 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv4c9\" (UniqueName: \"kubernetes.io/projected/18110c9f-5a23-4a4c-9b39-289c23ff6e1c-kube-api-access-xv4c9\") pod \"metallb-operator-controller-manager-6c4667f969-l2pv4\" (UID: \"18110c9f-5a23-4a4c-9b39-289c23ff6e1c\") " pod="metallb-system/metallb-operator-controller-manager-6c4667f969-l2pv4" Jan 21 15:59:38 crc kubenswrapper[4760]: I0121 15:59:38.762948 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/18110c9f-5a23-4a4c-9b39-289c23ff6e1c-webhook-cert\") pod \"metallb-operator-controller-manager-6c4667f969-l2pv4\" (UID: \"18110c9f-5a23-4a4c-9b39-289c23ff6e1c\") " pod="metallb-system/metallb-operator-controller-manager-6c4667f969-l2pv4" Jan 21 15:59:38 crc kubenswrapper[4760]: I0121 15:59:38.802011 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-746c87857b-5gngc"] Jan 21 15:59:38 crc kubenswrapper[4760]: I0121 15:59:38.802964 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-746c87857b-5gngc" Jan 21 15:59:38 crc kubenswrapper[4760]: I0121 15:59:38.805950 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 21 15:59:38 crc kubenswrapper[4760]: I0121 15:59:38.806107 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-9p7n7" Jan 21 15:59:38 crc kubenswrapper[4760]: I0121 15:59:38.807068 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 21 15:59:38 crc kubenswrapper[4760]: I0121 15:59:38.825511 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-746c87857b-5gngc"] Jan 21 15:59:38 crc kubenswrapper[4760]: I0121 15:59:38.864290 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/18110c9f-5a23-4a4c-9b39-289c23ff6e1c-apiservice-cert\") pod \"metallb-operator-controller-manager-6c4667f969-l2pv4\" (UID: \"18110c9f-5a23-4a4c-9b39-289c23ff6e1c\") " pod="metallb-system/metallb-operator-controller-manager-6c4667f969-l2pv4" Jan 21 15:59:38 crc kubenswrapper[4760]: I0121 15:59:38.864388 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xv4c9\" (UniqueName: \"kubernetes.io/projected/18110c9f-5a23-4a4c-9b39-289c23ff6e1c-kube-api-access-xv4c9\") pod \"metallb-operator-controller-manager-6c4667f969-l2pv4\" (UID: \"18110c9f-5a23-4a4c-9b39-289c23ff6e1c\") " pod="metallb-system/metallb-operator-controller-manager-6c4667f969-l2pv4" Jan 21 15:59:38 crc kubenswrapper[4760]: I0121 15:59:38.864440 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/18110c9f-5a23-4a4c-9b39-289c23ff6e1c-webhook-cert\") pod 
\"metallb-operator-controller-manager-6c4667f969-l2pv4\" (UID: \"18110c9f-5a23-4a4c-9b39-289c23ff6e1c\") " pod="metallb-system/metallb-operator-controller-manager-6c4667f969-l2pv4" Jan 21 15:59:38 crc kubenswrapper[4760]: I0121 15:59:38.878721 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/18110c9f-5a23-4a4c-9b39-289c23ff6e1c-apiservice-cert\") pod \"metallb-operator-controller-manager-6c4667f969-l2pv4\" (UID: \"18110c9f-5a23-4a4c-9b39-289c23ff6e1c\") " pod="metallb-system/metallb-operator-controller-manager-6c4667f969-l2pv4" Jan 21 15:59:38 crc kubenswrapper[4760]: I0121 15:59:38.878736 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/18110c9f-5a23-4a4c-9b39-289c23ff6e1c-webhook-cert\") pod \"metallb-operator-controller-manager-6c4667f969-l2pv4\" (UID: \"18110c9f-5a23-4a4c-9b39-289c23ff6e1c\") " pod="metallb-system/metallb-operator-controller-manager-6c4667f969-l2pv4" Jan 21 15:59:38 crc kubenswrapper[4760]: I0121 15:59:38.883536 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xv4c9\" (UniqueName: \"kubernetes.io/projected/18110c9f-5a23-4a4c-9b39-289c23ff6e1c-kube-api-access-xv4c9\") pod \"metallb-operator-controller-manager-6c4667f969-l2pv4\" (UID: \"18110c9f-5a23-4a4c-9b39-289c23ff6e1c\") " pod="metallb-system/metallb-operator-controller-manager-6c4667f969-l2pv4" Jan 21 15:59:38 crc kubenswrapper[4760]: I0121 15:59:38.890027 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6c4667f969-l2pv4" Jan 21 15:59:38 crc kubenswrapper[4760]: I0121 15:59:38.965755 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/280fc33b-ec55-41cd-92e4-17ed099904a0-webhook-cert\") pod \"metallb-operator-webhook-server-746c87857b-5gngc\" (UID: \"280fc33b-ec55-41cd-92e4-17ed099904a0\") " pod="metallb-system/metallb-operator-webhook-server-746c87857b-5gngc" Jan 21 15:59:38 crc kubenswrapper[4760]: I0121 15:59:38.965885 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/280fc33b-ec55-41cd-92e4-17ed099904a0-apiservice-cert\") pod \"metallb-operator-webhook-server-746c87857b-5gngc\" (UID: \"280fc33b-ec55-41cd-92e4-17ed099904a0\") " pod="metallb-system/metallb-operator-webhook-server-746c87857b-5gngc" Jan 21 15:59:38 crc kubenswrapper[4760]: I0121 15:59:38.965919 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw2hv\" (UniqueName: \"kubernetes.io/projected/280fc33b-ec55-41cd-92e4-17ed099904a0-kube-api-access-bw2hv\") pod \"metallb-operator-webhook-server-746c87857b-5gngc\" (UID: \"280fc33b-ec55-41cd-92e4-17ed099904a0\") " pod="metallb-system/metallb-operator-webhook-server-746c87857b-5gngc" Jan 21 15:59:39 crc kubenswrapper[4760]: I0121 15:59:39.067198 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/280fc33b-ec55-41cd-92e4-17ed099904a0-webhook-cert\") pod \"metallb-operator-webhook-server-746c87857b-5gngc\" (UID: \"280fc33b-ec55-41cd-92e4-17ed099904a0\") " pod="metallb-system/metallb-operator-webhook-server-746c87857b-5gngc" Jan 21 15:59:39 crc kubenswrapper[4760]: I0121 15:59:39.067294 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/280fc33b-ec55-41cd-92e4-17ed099904a0-apiservice-cert\") pod \"metallb-operator-webhook-server-746c87857b-5gngc\" (UID: \"280fc33b-ec55-41cd-92e4-17ed099904a0\") " pod="metallb-system/metallb-operator-webhook-server-746c87857b-5gngc" Jan 21 15:59:39 crc kubenswrapper[4760]: I0121 15:59:39.067403 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bw2hv\" (UniqueName: \"kubernetes.io/projected/280fc33b-ec55-41cd-92e4-17ed099904a0-kube-api-access-bw2hv\") pod \"metallb-operator-webhook-server-746c87857b-5gngc\" (UID: \"280fc33b-ec55-41cd-92e4-17ed099904a0\") " pod="metallb-system/metallb-operator-webhook-server-746c87857b-5gngc" Jan 21 15:59:39 crc kubenswrapper[4760]: I0121 15:59:39.072220 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/280fc33b-ec55-41cd-92e4-17ed099904a0-apiservice-cert\") pod \"metallb-operator-webhook-server-746c87857b-5gngc\" (UID: \"280fc33b-ec55-41cd-92e4-17ed099904a0\") " pod="metallb-system/metallb-operator-webhook-server-746c87857b-5gngc" Jan 21 15:59:39 crc kubenswrapper[4760]: I0121 15:59:39.083506 4760 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 21 15:59:39 crc kubenswrapper[4760]: I0121 15:59:39.083794 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/280fc33b-ec55-41cd-92e4-17ed099904a0-webhook-cert\") pod \"metallb-operator-webhook-server-746c87857b-5gngc\" (UID: \"280fc33b-ec55-41cd-92e4-17ed099904a0\") " pod="metallb-system/metallb-operator-webhook-server-746c87857b-5gngc" Jan 21 15:59:39 crc kubenswrapper[4760]: I0121 15:59:39.113466 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bw2hv\" (UniqueName: 
\"kubernetes.io/projected/280fc33b-ec55-41cd-92e4-17ed099904a0-kube-api-access-bw2hv\") pod \"metallb-operator-webhook-server-746c87857b-5gngc\" (UID: \"280fc33b-ec55-41cd-92e4-17ed099904a0\") " pod="metallb-system/metallb-operator-webhook-server-746c87857b-5gngc" Jan 21 15:59:39 crc kubenswrapper[4760]: I0121 15:59:39.119047 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-746c87857b-5gngc" Jan 21 15:59:39 crc kubenswrapper[4760]: I0121 15:59:39.432564 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6c4667f969-l2pv4"] Jan 21 15:59:39 crc kubenswrapper[4760]: I0121 15:59:39.611287 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-746c87857b-5gngc"] Jan 21 15:59:39 crc kubenswrapper[4760]: W0121 15:59:39.618116 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod280fc33b_ec55_41cd_92e4_17ed099904a0.slice/crio-4eaead13f584c4a9d99be93cda8c17c0ae8a96e86d070a1e5e05b369cf9a612e WatchSource:0}: Error finding container 4eaead13f584c4a9d99be93cda8c17c0ae8a96e86d070a1e5e05b369cf9a612e: Status 404 returned error can't find the container with id 4eaead13f584c4a9d99be93cda8c17c0ae8a96e86d070a1e5e05b369cf9a612e Jan 21 15:59:39 crc kubenswrapper[4760]: I0121 15:59:39.910754 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6c4667f969-l2pv4" event={"ID":"18110c9f-5a23-4a4c-9b39-289c23ff6e1c","Type":"ContainerStarted","Data":"c221fde29800c6cc0c0dfcb1a82aa3014246dab470e58418f94598436ee2e3ed"} Jan 21 15:59:39 crc kubenswrapper[4760]: I0121 15:59:39.912727 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-746c87857b-5gngc" 
event={"ID":"280fc33b-ec55-41cd-92e4-17ed099904a0","Type":"ContainerStarted","Data":"4eaead13f584c4a9d99be93cda8c17c0ae8a96e86d070a1e5e05b369cf9a612e"} Jan 21 15:59:47 crc kubenswrapper[4760]: I0121 15:59:47.968921 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6c4667f969-l2pv4" event={"ID":"18110c9f-5a23-4a4c-9b39-289c23ff6e1c","Type":"ContainerStarted","Data":"3a2570d30c146409622d371265afe673bc1381d5535484ff80ce25e1caea2e7d"} Jan 21 15:59:50 crc kubenswrapper[4760]: I0121 15:59:50.257620 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-746c87857b-5gngc" event={"ID":"280fc33b-ec55-41cd-92e4-17ed099904a0","Type":"ContainerStarted","Data":"bc7775d0a8b7c5805412c9c59c16fd5fd1e07eb1fe18ba2f2bc7abcbfbaf7a28"} Jan 21 15:59:50 crc kubenswrapper[4760]: I0121 15:59:50.258001 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6c4667f969-l2pv4" Jan 21 15:59:50 crc kubenswrapper[4760]: I0121 15:59:50.258021 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-746c87857b-5gngc" Jan 21 15:59:50 crc kubenswrapper[4760]: I0121 15:59:50.285963 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6c4667f969-l2pv4" podStartSLOduration=4.2840119770000005 podStartE2EDuration="12.285787143s" podCreationTimestamp="2026-01-21 15:59:38 +0000 UTC" firstStartedPulling="2026-01-21 15:59:39.467343532 +0000 UTC m=+750.135113100" lastFinishedPulling="2026-01-21 15:59:47.469118688 +0000 UTC m=+758.136888266" observedRunningTime="2026-01-21 15:59:50.279512945 +0000 UTC m=+760.947282523" watchObservedRunningTime="2026-01-21 15:59:50.285787143 +0000 UTC m=+760.953556721" Jan 21 15:59:50 crc kubenswrapper[4760]: I0121 15:59:50.308785 4760 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-746c87857b-5gngc" podStartSLOduration=4.439178055 podStartE2EDuration="12.308756242s" podCreationTimestamp="2026-01-21 15:59:38 +0000 UTC" firstStartedPulling="2026-01-21 15:59:39.622364007 +0000 UTC m=+750.290133585" lastFinishedPulling="2026-01-21 15:59:47.491942194 +0000 UTC m=+758.159711772" observedRunningTime="2026-01-21 15:59:50.306827548 +0000 UTC m=+760.974597126" watchObservedRunningTime="2026-01-21 15:59:50.308756242 +0000 UTC m=+760.976525830" Jan 21 15:59:59 crc kubenswrapper[4760]: I0121 15:59:59.125888 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-746c87857b-5gngc" Jan 21 16:00:00 crc kubenswrapper[4760]: I0121 16:00:00.175263 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483520-59n5f"] Jan 21 16:00:00 crc kubenswrapper[4760]: I0121 16:00:00.176196 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-59n5f" Jan 21 16:00:00 crc kubenswrapper[4760]: I0121 16:00:00.178506 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 16:00:00 crc kubenswrapper[4760]: I0121 16:00:00.178542 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 16:00:00 crc kubenswrapper[4760]: I0121 16:00:00.195140 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483520-59n5f"] Jan 21 16:00:00 crc kubenswrapper[4760]: I0121 16:00:00.266216 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pcnk\" (UniqueName: \"kubernetes.io/projected/2b71e327-2590-4a0d-8f08-44d58d095169-kube-api-access-4pcnk\") pod \"collect-profiles-29483520-59n5f\" (UID: \"2b71e327-2590-4a0d-8f08-44d58d095169\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-59n5f" Jan 21 16:00:00 crc kubenswrapper[4760]: I0121 16:00:00.266252 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b71e327-2590-4a0d-8f08-44d58d095169-config-volume\") pod \"collect-profiles-29483520-59n5f\" (UID: \"2b71e327-2590-4a0d-8f08-44d58d095169\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-59n5f" Jan 21 16:00:00 crc kubenswrapper[4760]: I0121 16:00:00.266279 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2b71e327-2590-4a0d-8f08-44d58d095169-secret-volume\") pod \"collect-profiles-29483520-59n5f\" (UID: \"2b71e327-2590-4a0d-8f08-44d58d095169\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-59n5f" Jan 21 16:00:00 crc kubenswrapper[4760]: I0121 16:00:00.367357 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pcnk\" (UniqueName: \"kubernetes.io/projected/2b71e327-2590-4a0d-8f08-44d58d095169-kube-api-access-4pcnk\") pod \"collect-profiles-29483520-59n5f\" (UID: \"2b71e327-2590-4a0d-8f08-44d58d095169\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-59n5f" Jan 21 16:00:00 crc kubenswrapper[4760]: I0121 16:00:00.367415 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b71e327-2590-4a0d-8f08-44d58d095169-config-volume\") pod \"collect-profiles-29483520-59n5f\" (UID: \"2b71e327-2590-4a0d-8f08-44d58d095169\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-59n5f" Jan 21 16:00:00 crc kubenswrapper[4760]: I0121 16:00:00.367455 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2b71e327-2590-4a0d-8f08-44d58d095169-secret-volume\") pod \"collect-profiles-29483520-59n5f\" (UID: \"2b71e327-2590-4a0d-8f08-44d58d095169\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-59n5f" Jan 21 16:00:00 crc kubenswrapper[4760]: I0121 16:00:00.368935 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b71e327-2590-4a0d-8f08-44d58d095169-config-volume\") pod \"collect-profiles-29483520-59n5f\" (UID: \"2b71e327-2590-4a0d-8f08-44d58d095169\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-59n5f" Jan 21 16:00:00 crc kubenswrapper[4760]: I0121 16:00:00.388164 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/2b71e327-2590-4a0d-8f08-44d58d095169-secret-volume\") pod \"collect-profiles-29483520-59n5f\" (UID: \"2b71e327-2590-4a0d-8f08-44d58d095169\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-59n5f" Jan 21 16:00:00 crc kubenswrapper[4760]: I0121 16:00:00.397646 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pcnk\" (UniqueName: \"kubernetes.io/projected/2b71e327-2590-4a0d-8f08-44d58d095169-kube-api-access-4pcnk\") pod \"collect-profiles-29483520-59n5f\" (UID: \"2b71e327-2590-4a0d-8f08-44d58d095169\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-59n5f" Jan 21 16:00:00 crc kubenswrapper[4760]: I0121 16:00:00.497005 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-59n5f" Jan 21 16:00:00 crc kubenswrapper[4760]: I0121 16:00:00.714860 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483520-59n5f"] Jan 21 16:00:01 crc kubenswrapper[4760]: I0121 16:00:01.331201 4760 generic.go:334] "Generic (PLEG): container finished" podID="2b71e327-2590-4a0d-8f08-44d58d095169" containerID="ef02e145078e842ec9d815a9c5581b8d539b4a39bb6283ec22a7de868f0aab8d" exitCode=0 Jan 21 16:00:01 crc kubenswrapper[4760]: I0121 16:00:01.331257 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-59n5f" event={"ID":"2b71e327-2590-4a0d-8f08-44d58d095169","Type":"ContainerDied","Data":"ef02e145078e842ec9d815a9c5581b8d539b4a39bb6283ec22a7de868f0aab8d"} Jan 21 16:00:01 crc kubenswrapper[4760]: I0121 16:00:01.331292 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-59n5f" 
event={"ID":"2b71e327-2590-4a0d-8f08-44d58d095169","Type":"ContainerStarted","Data":"25f35d08976da2f4c23669eba240bd7ccb44f5613d77c8a65656dc4facd6d643"} Jan 21 16:00:02 crc kubenswrapper[4760]: I0121 16:00:02.644721 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-59n5f" Jan 21 16:00:02 crc kubenswrapper[4760]: I0121 16:00:02.697797 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2b71e327-2590-4a0d-8f08-44d58d095169-secret-volume\") pod \"2b71e327-2590-4a0d-8f08-44d58d095169\" (UID: \"2b71e327-2590-4a0d-8f08-44d58d095169\") " Jan 21 16:00:02 crc kubenswrapper[4760]: I0121 16:00:02.697914 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b71e327-2590-4a0d-8f08-44d58d095169-config-volume\") pod \"2b71e327-2590-4a0d-8f08-44d58d095169\" (UID: \"2b71e327-2590-4a0d-8f08-44d58d095169\") " Jan 21 16:00:02 crc kubenswrapper[4760]: I0121 16:00:02.697940 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pcnk\" (UniqueName: \"kubernetes.io/projected/2b71e327-2590-4a0d-8f08-44d58d095169-kube-api-access-4pcnk\") pod \"2b71e327-2590-4a0d-8f08-44d58d095169\" (UID: \"2b71e327-2590-4a0d-8f08-44d58d095169\") " Jan 21 16:00:02 crc kubenswrapper[4760]: I0121 16:00:02.699261 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b71e327-2590-4a0d-8f08-44d58d095169-config-volume" (OuterVolumeSpecName: "config-volume") pod "2b71e327-2590-4a0d-8f08-44d58d095169" (UID: "2b71e327-2590-4a0d-8f08-44d58d095169"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:00:02 crc kubenswrapper[4760]: I0121 16:00:02.704410 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b71e327-2590-4a0d-8f08-44d58d095169-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2b71e327-2590-4a0d-8f08-44d58d095169" (UID: "2b71e327-2590-4a0d-8f08-44d58d095169"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:00:02 crc kubenswrapper[4760]: I0121 16:00:02.738213 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b71e327-2590-4a0d-8f08-44d58d095169-kube-api-access-4pcnk" (OuterVolumeSpecName: "kube-api-access-4pcnk") pod "2b71e327-2590-4a0d-8f08-44d58d095169" (UID: "2b71e327-2590-4a0d-8f08-44d58d095169"). InnerVolumeSpecName "kube-api-access-4pcnk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:00:02 crc kubenswrapper[4760]: I0121 16:00:02.799108 4760 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b71e327-2590-4a0d-8f08-44d58d095169-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 16:00:02 crc kubenswrapper[4760]: I0121 16:00:02.799152 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pcnk\" (UniqueName: \"kubernetes.io/projected/2b71e327-2590-4a0d-8f08-44d58d095169-kube-api-access-4pcnk\") on node \"crc\" DevicePath \"\"" Jan 21 16:00:02 crc kubenswrapper[4760]: I0121 16:00:02.799168 4760 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2b71e327-2590-4a0d-8f08-44d58d095169-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 16:00:03 crc kubenswrapper[4760]: I0121 16:00:03.344441 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-59n5f" Jan 21 16:00:03 crc kubenswrapper[4760]: I0121 16:00:03.345483 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483520-59n5f" event={"ID":"2b71e327-2590-4a0d-8f08-44d58d095169","Type":"ContainerDied","Data":"25f35d08976da2f4c23669eba240bd7ccb44f5613d77c8a65656dc4facd6d643"} Jan 21 16:00:03 crc kubenswrapper[4760]: I0121 16:00:03.345563 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25f35d08976da2f4c23669eba240bd7ccb44f5613d77c8a65656dc4facd6d643" Jan 21 16:00:18 crc kubenswrapper[4760]: I0121 16:00:18.893039 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6c4667f969-l2pv4" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.764994 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-gsbq4"] Jan 21 16:00:19 crc kubenswrapper[4760]: E0121 16:00:19.765610 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b71e327-2590-4a0d-8f08-44d58d095169" containerName="collect-profiles" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.765633 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b71e327-2590-4a0d-8f08-44d58d095169" containerName="collect-profiles" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.765773 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b71e327-2590-4a0d-8f08-44d58d095169" containerName="collect-profiles" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.768638 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-gsbq4" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.771878 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.772102 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-7nqq8" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.772236 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.776102 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-8cr5r"] Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.779454 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8cr5r" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.784849 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.787012 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-8cr5r"] Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.839776 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-d6jcx"] Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.840618 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-d6jcx" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.841973 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5f599753-8125-400e-b9dd-f94bee01fdf8-metrics-certs\") pod \"frr-k8s-gsbq4\" (UID: \"5f599753-8125-400e-b9dd-f94bee01fdf8\") " pod="metallb-system/frr-k8s-gsbq4" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.842010 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/5f599753-8125-400e-b9dd-f94bee01fdf8-frr-conf\") pod \"frr-k8s-gsbq4\" (UID: \"5f599753-8125-400e-b9dd-f94bee01fdf8\") " pod="metallb-system/frr-k8s-gsbq4" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.842034 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/dbe6716c-6a30-454c-979c-59566d2c29b6-metallb-excludel2\") pod \"speaker-d6jcx\" (UID: \"dbe6716c-6a30-454c-979c-59566d2c29b6\") " pod="metallb-system/speaker-d6jcx" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.842064 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/dbe6716c-6a30-454c-979c-59566d2c29b6-memberlist\") pod \"speaker-d6jcx\" (UID: \"dbe6716c-6a30-454c-979c-59566d2c29b6\") " pod="metallb-system/speaker-d6jcx" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.842079 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/5f599753-8125-400e-b9dd-f94bee01fdf8-frr-sockets\") pod \"frr-k8s-gsbq4\" (UID: \"5f599753-8125-400e-b9dd-f94bee01fdf8\") " pod="metallb-system/frr-k8s-gsbq4" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 
16:00:19.842092 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/5f599753-8125-400e-b9dd-f94bee01fdf8-reloader\") pod \"frr-k8s-gsbq4\" (UID: \"5f599753-8125-400e-b9dd-f94bee01fdf8\") " pod="metallb-system/frr-k8s-gsbq4" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.842110 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dbe6716c-6a30-454c-979c-59566d2c29b6-metrics-certs\") pod \"speaker-d6jcx\" (UID: \"dbe6716c-6a30-454c-979c-59566d2c29b6\") " pod="metallb-system/speaker-d6jcx" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.842135 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfzfn\" (UniqueName: \"kubernetes.io/projected/120c759b-d895-4898-a35a-2c7f74bb71b2-kube-api-access-vfzfn\") pod \"frr-k8s-webhook-server-7df86c4f6c-8cr5r\" (UID: \"120c759b-d895-4898-a35a-2c7f74bb71b2\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8cr5r" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.842212 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/5f599753-8125-400e-b9dd-f94bee01fdf8-metrics\") pod \"frr-k8s-gsbq4\" (UID: \"5f599753-8125-400e-b9dd-f94bee01fdf8\") " pod="metallb-system/frr-k8s-gsbq4" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.842238 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c87tg\" (UniqueName: \"kubernetes.io/projected/5f599753-8125-400e-b9dd-f94bee01fdf8-kube-api-access-c87tg\") pod \"frr-k8s-gsbq4\" (UID: \"5f599753-8125-400e-b9dd-f94bee01fdf8\") " pod="metallb-system/frr-k8s-gsbq4" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.842288 4760 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/120c759b-d895-4898-a35a-2c7f74bb71b2-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-8cr5r\" (UID: \"120c759b-d895-4898-a35a-2c7f74bb71b2\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8cr5r" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.842372 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/5f599753-8125-400e-b9dd-f94bee01fdf8-frr-startup\") pod \"frr-k8s-gsbq4\" (UID: \"5f599753-8125-400e-b9dd-f94bee01fdf8\") " pod="metallb-system/frr-k8s-gsbq4" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.842389 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzdlt\" (UniqueName: \"kubernetes.io/projected/dbe6716c-6a30-454c-979c-59566d2c29b6-kube-api-access-lzdlt\") pod \"speaker-d6jcx\" (UID: \"dbe6716c-6a30-454c-979c-59566d2c29b6\") " pod="metallb-system/speaker-d6jcx" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.842918 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-56w26" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.843054 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.843455 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.843845 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.864402 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-skl79"] Jan 21 16:00:19 crc 
kubenswrapper[4760]: I0121 16:00:19.865400 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-skl79" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.867656 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.899811 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-skl79"] Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.943868 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d7fr\" (UniqueName: \"kubernetes.io/projected/57bfd668-6e8b-475a-99b4-cdbd22c9c19f-kube-api-access-7d7fr\") pod \"controller-6968d8fdc4-skl79\" (UID: \"57bfd668-6e8b-475a-99b4-cdbd22c9c19f\") " pod="metallb-system/controller-6968d8fdc4-skl79" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.943943 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/5f599753-8125-400e-b9dd-f94bee01fdf8-frr-startup\") pod \"frr-k8s-gsbq4\" (UID: \"5f599753-8125-400e-b9dd-f94bee01fdf8\") " pod="metallb-system/frr-k8s-gsbq4" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.943969 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzdlt\" (UniqueName: \"kubernetes.io/projected/dbe6716c-6a30-454c-979c-59566d2c29b6-kube-api-access-lzdlt\") pod \"speaker-d6jcx\" (UID: \"dbe6716c-6a30-454c-979c-59566d2c29b6\") " pod="metallb-system/speaker-d6jcx" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.944004 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5f599753-8125-400e-b9dd-f94bee01fdf8-metrics-certs\") pod \"frr-k8s-gsbq4\" (UID: \"5f599753-8125-400e-b9dd-f94bee01fdf8\") " 
pod="metallb-system/frr-k8s-gsbq4" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.944034 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/5f599753-8125-400e-b9dd-f94bee01fdf8-frr-conf\") pod \"frr-k8s-gsbq4\" (UID: \"5f599753-8125-400e-b9dd-f94bee01fdf8\") " pod="metallb-system/frr-k8s-gsbq4" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.944059 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/57bfd668-6e8b-475a-99b4-cdbd22c9c19f-cert\") pod \"controller-6968d8fdc4-skl79\" (UID: \"57bfd668-6e8b-475a-99b4-cdbd22c9c19f\") " pod="metallb-system/controller-6968d8fdc4-skl79" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.944079 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/dbe6716c-6a30-454c-979c-59566d2c29b6-metallb-excludel2\") pod \"speaker-d6jcx\" (UID: \"dbe6716c-6a30-454c-979c-59566d2c29b6\") " pod="metallb-system/speaker-d6jcx" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.944101 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/dbe6716c-6a30-454c-979c-59566d2c29b6-memberlist\") pod \"speaker-d6jcx\" (UID: \"dbe6716c-6a30-454c-979c-59566d2c29b6\") " pod="metallb-system/speaker-d6jcx" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.944122 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/5f599753-8125-400e-b9dd-f94bee01fdf8-frr-sockets\") pod \"frr-k8s-gsbq4\" (UID: \"5f599753-8125-400e-b9dd-f94bee01fdf8\") " pod="metallb-system/frr-k8s-gsbq4" Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.944140 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"reloader\" (UniqueName: \"kubernetes.io/empty-dir/5f599753-8125-400e-b9dd-f94bee01fdf8-reloader\") pod \"frr-k8s-gsbq4\" (UID: \"5f599753-8125-400e-b9dd-f94bee01fdf8\") " pod="metallb-system/frr-k8s-gsbq4"
Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.944161 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dbe6716c-6a30-454c-979c-59566d2c29b6-metrics-certs\") pod \"speaker-d6jcx\" (UID: \"dbe6716c-6a30-454c-979c-59566d2c29b6\") " pod="metallb-system/speaker-d6jcx"
Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.944188 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfzfn\" (UniqueName: \"kubernetes.io/projected/120c759b-d895-4898-a35a-2c7f74bb71b2-kube-api-access-vfzfn\") pod \"frr-k8s-webhook-server-7df86c4f6c-8cr5r\" (UID: \"120c759b-d895-4898-a35a-2c7f74bb71b2\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8cr5r"
Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.944226 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/5f599753-8125-400e-b9dd-f94bee01fdf8-metrics\") pod \"frr-k8s-gsbq4\" (UID: \"5f599753-8125-400e-b9dd-f94bee01fdf8\") " pod="metallb-system/frr-k8s-gsbq4"
Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.944244 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c87tg\" (UniqueName: \"kubernetes.io/projected/5f599753-8125-400e-b9dd-f94bee01fdf8-kube-api-access-c87tg\") pod \"frr-k8s-gsbq4\" (UID: \"5f599753-8125-400e-b9dd-f94bee01fdf8\") " pod="metallb-system/frr-k8s-gsbq4"
Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.944265 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/57bfd668-6e8b-475a-99b4-cdbd22c9c19f-metrics-certs\") pod \"controller-6968d8fdc4-skl79\" (UID: \"57bfd668-6e8b-475a-99b4-cdbd22c9c19f\") " pod="metallb-system/controller-6968d8fdc4-skl79"
Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.944298 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/120c759b-d895-4898-a35a-2c7f74bb71b2-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-8cr5r\" (UID: \"120c759b-d895-4898-a35a-2c7f74bb71b2\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8cr5r"
Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.945714 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/5f599753-8125-400e-b9dd-f94bee01fdf8-frr-startup\") pod \"frr-k8s-gsbq4\" (UID: \"5f599753-8125-400e-b9dd-f94bee01fdf8\") " pod="metallb-system/frr-k8s-gsbq4"
Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.945734 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/5f599753-8125-400e-b9dd-f94bee01fdf8-frr-sockets\") pod \"frr-k8s-gsbq4\" (UID: \"5f599753-8125-400e-b9dd-f94bee01fdf8\") " pod="metallb-system/frr-k8s-gsbq4"
Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.946281 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/5f599753-8125-400e-b9dd-f94bee01fdf8-reloader\") pod \"frr-k8s-gsbq4\" (UID: \"5f599753-8125-400e-b9dd-f94bee01fdf8\") " pod="metallb-system/frr-k8s-gsbq4"
Jan 21 16:00:19 crc kubenswrapper[4760]: E0121 16:00:19.946384 4760 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found
Jan 21 16:00:19 crc kubenswrapper[4760]: E0121 16:00:19.946440 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dbe6716c-6a30-454c-979c-59566d2c29b6-metrics-certs podName:dbe6716c-6a30-454c-979c-59566d2c29b6 nodeName:}" failed. No retries permitted until 2026-01-21 16:00:20.446422209 +0000 UTC m=+791.114191877 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dbe6716c-6a30-454c-979c-59566d2c29b6-metrics-certs") pod "speaker-d6jcx" (UID: "dbe6716c-6a30-454c-979c-59566d2c29b6") : secret "speaker-certs-secret" not found
Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.946814 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/5f599753-8125-400e-b9dd-f94bee01fdf8-metrics\") pod \"frr-k8s-gsbq4\" (UID: \"5f599753-8125-400e-b9dd-f94bee01fdf8\") " pod="metallb-system/frr-k8s-gsbq4"
Jan 21 16:00:19 crc kubenswrapper[4760]: E0121 16:00:19.947208 4760 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Jan 21 16:00:19 crc kubenswrapper[4760]: E0121 16:00:19.947247 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dbe6716c-6a30-454c-979c-59566d2c29b6-memberlist podName:dbe6716c-6a30-454c-979c-59566d2c29b6 nodeName:}" failed. No retries permitted until 2026-01-21 16:00:20.44723687 +0000 UTC m=+791.115006538 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/dbe6716c-6a30-454c-979c-59566d2c29b6-memberlist") pod "speaker-d6jcx" (UID: "dbe6716c-6a30-454c-979c-59566d2c29b6") : secret "metallb-memberlist" not found
Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.947495 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/5f599753-8125-400e-b9dd-f94bee01fdf8-frr-conf\") pod \"frr-k8s-gsbq4\" (UID: \"5f599753-8125-400e-b9dd-f94bee01fdf8\") " pod="metallb-system/frr-k8s-gsbq4"
Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.954023 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5f599753-8125-400e-b9dd-f94bee01fdf8-metrics-certs\") pod \"frr-k8s-gsbq4\" (UID: \"5f599753-8125-400e-b9dd-f94bee01fdf8\") " pod="metallb-system/frr-k8s-gsbq4"
Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.957293 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/dbe6716c-6a30-454c-979c-59566d2c29b6-metallb-excludel2\") pod \"speaker-d6jcx\" (UID: \"dbe6716c-6a30-454c-979c-59566d2c29b6\") " pod="metallb-system/speaker-d6jcx"
Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.964891 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c87tg\" (UniqueName: \"kubernetes.io/projected/5f599753-8125-400e-b9dd-f94bee01fdf8-kube-api-access-c87tg\") pod \"frr-k8s-gsbq4\" (UID: \"5f599753-8125-400e-b9dd-f94bee01fdf8\") " pod="metallb-system/frr-k8s-gsbq4"
Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.965156 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/120c759b-d895-4898-a35a-2c7f74bb71b2-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-8cr5r\" (UID: \"120c759b-d895-4898-a35a-2c7f74bb71b2\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8cr5r"
Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.969214 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzdlt\" (UniqueName: \"kubernetes.io/projected/dbe6716c-6a30-454c-979c-59566d2c29b6-kube-api-access-lzdlt\") pod \"speaker-d6jcx\" (UID: \"dbe6716c-6a30-454c-979c-59566d2c29b6\") " pod="metallb-system/speaker-d6jcx"
Jan 21 16:00:19 crc kubenswrapper[4760]: I0121 16:00:19.973754 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfzfn\" (UniqueName: \"kubernetes.io/projected/120c759b-d895-4898-a35a-2c7f74bb71b2-kube-api-access-vfzfn\") pod \"frr-k8s-webhook-server-7df86c4f6c-8cr5r\" (UID: \"120c759b-d895-4898-a35a-2c7f74bb71b2\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8cr5r"
Jan 21 16:00:20 crc kubenswrapper[4760]: I0121 16:00:20.044962 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7d7fr\" (UniqueName: \"kubernetes.io/projected/57bfd668-6e8b-475a-99b4-cdbd22c9c19f-kube-api-access-7d7fr\") pod \"controller-6968d8fdc4-skl79\" (UID: \"57bfd668-6e8b-475a-99b4-cdbd22c9c19f\") " pod="metallb-system/controller-6968d8fdc4-skl79"
Jan 21 16:00:20 crc kubenswrapper[4760]: I0121 16:00:20.045027 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/57bfd668-6e8b-475a-99b4-cdbd22c9c19f-cert\") pod \"controller-6968d8fdc4-skl79\" (UID: \"57bfd668-6e8b-475a-99b4-cdbd22c9c19f\") " pod="metallb-system/controller-6968d8fdc4-skl79"
Jan 21 16:00:20 crc kubenswrapper[4760]: I0121 16:00:20.045086 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/57bfd668-6e8b-475a-99b4-cdbd22c9c19f-metrics-certs\") pod \"controller-6968d8fdc4-skl79\" (UID: \"57bfd668-6e8b-475a-99b4-cdbd22c9c19f\") " pod="metallb-system/controller-6968d8fdc4-skl79"
Jan 21 16:00:20 crc kubenswrapper[4760]: E0121 16:00:20.045198 4760 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found
Jan 21 16:00:20 crc kubenswrapper[4760]: E0121 16:00:20.045248 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/57bfd668-6e8b-475a-99b4-cdbd22c9c19f-metrics-certs podName:57bfd668-6e8b-475a-99b4-cdbd22c9c19f nodeName:}" failed. No retries permitted until 2026-01-21 16:00:20.545233739 +0000 UTC m=+791.213003317 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/57bfd668-6e8b-475a-99b4-cdbd22c9c19f-metrics-certs") pod "controller-6968d8fdc4-skl79" (UID: "57bfd668-6e8b-475a-99b4-cdbd22c9c19f") : secret "controller-certs-secret" not found
Jan 21 16:00:20 crc kubenswrapper[4760]: I0121 16:00:20.049106 4760 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Jan 21 16:00:20 crc kubenswrapper[4760]: I0121 16:00:20.059168 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/57bfd668-6e8b-475a-99b4-cdbd22c9c19f-cert\") pod \"controller-6968d8fdc4-skl79\" (UID: \"57bfd668-6e8b-475a-99b4-cdbd22c9c19f\") " pod="metallb-system/controller-6968d8fdc4-skl79"
Jan 21 16:00:20 crc kubenswrapper[4760]: I0121 16:00:20.064413 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7d7fr\" (UniqueName: \"kubernetes.io/projected/57bfd668-6e8b-475a-99b4-cdbd22c9c19f-kube-api-access-7d7fr\") pod \"controller-6968d8fdc4-skl79\" (UID: \"57bfd668-6e8b-475a-99b4-cdbd22c9c19f\") " pod="metallb-system/controller-6968d8fdc4-skl79"
Jan 21 16:00:20 crc kubenswrapper[4760]: I0121 16:00:20.091998 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-gsbq4"
Jan 21 16:00:20 crc kubenswrapper[4760]: I0121 16:00:20.105340 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8cr5r"
Jan 21 16:00:20 crc kubenswrapper[4760]: I0121 16:00:20.346521 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-8cr5r"]
Jan 21 16:00:20 crc kubenswrapper[4760]: I0121 16:00:20.437816 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8cr5r" event={"ID":"120c759b-d895-4898-a35a-2c7f74bb71b2","Type":"ContainerStarted","Data":"93ca1b00c2bdc337c493a6fd90d4066b35df7e1f08c3b139fc114b3a1beff013"}
Jan 21 16:00:20 crc kubenswrapper[4760]: I0121 16:00:20.439970 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gsbq4" event={"ID":"5f599753-8125-400e-b9dd-f94bee01fdf8","Type":"ContainerStarted","Data":"760c9d8442ac1332fe694a68d200ad111d4116c090323f1729fa8dab9b7f08e2"}
Jan 21 16:00:20 crc kubenswrapper[4760]: I0121 16:00:20.449980 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/dbe6716c-6a30-454c-979c-59566d2c29b6-memberlist\") pod \"speaker-d6jcx\" (UID: \"dbe6716c-6a30-454c-979c-59566d2c29b6\") " pod="metallb-system/speaker-d6jcx"
Jan 21 16:00:20 crc kubenswrapper[4760]: I0121 16:00:20.450037 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dbe6716c-6a30-454c-979c-59566d2c29b6-metrics-certs\") pod \"speaker-d6jcx\" (UID: \"dbe6716c-6a30-454c-979c-59566d2c29b6\") " pod="metallb-system/speaker-d6jcx"
Jan 21 16:00:20 crc kubenswrapper[4760]: E0121 16:00:20.450484 4760 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Jan 21 16:00:20 crc kubenswrapper[4760]: E0121 16:00:20.450707 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dbe6716c-6a30-454c-979c-59566d2c29b6-memberlist podName:dbe6716c-6a30-454c-979c-59566d2c29b6 nodeName:}" failed. No retries permitted until 2026-01-21 16:00:21.450678666 +0000 UTC m=+792.118448284 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/dbe6716c-6a30-454c-979c-59566d2c29b6-memberlist") pod "speaker-d6jcx" (UID: "dbe6716c-6a30-454c-979c-59566d2c29b6") : secret "metallb-memberlist" not found
Jan 21 16:00:20 crc kubenswrapper[4760]: I0121 16:00:20.455578 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dbe6716c-6a30-454c-979c-59566d2c29b6-metrics-certs\") pod \"speaker-d6jcx\" (UID: \"dbe6716c-6a30-454c-979c-59566d2c29b6\") " pod="metallb-system/speaker-d6jcx"
Jan 21 16:00:20 crc kubenswrapper[4760]: I0121 16:00:20.550985 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/57bfd668-6e8b-475a-99b4-cdbd22c9c19f-metrics-certs\") pod \"controller-6968d8fdc4-skl79\" (UID: \"57bfd668-6e8b-475a-99b4-cdbd22c9c19f\") " pod="metallb-system/controller-6968d8fdc4-skl79"
Jan 21 16:00:20 crc kubenswrapper[4760]: I0121 16:00:20.553904 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/57bfd668-6e8b-475a-99b4-cdbd22c9c19f-metrics-certs\") pod \"controller-6968d8fdc4-skl79\" (UID: \"57bfd668-6e8b-475a-99b4-cdbd22c9c19f\") " pod="metallb-system/controller-6968d8fdc4-skl79"
Jan 21 16:00:20 crc kubenswrapper[4760]: I0121 16:00:20.794094 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-skl79"
Jan 21 16:00:20 crc kubenswrapper[4760]: I0121 16:00:20.946265 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 16:00:20 crc kubenswrapper[4760]: I0121 16:00:20.946361 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 16:00:21 crc kubenswrapper[4760]: I0121 16:00:21.046391 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-skl79"]
Jan 21 16:00:21 crc kubenswrapper[4760]: W0121 16:00:21.052674 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57bfd668_6e8b_475a_99b4_cdbd22c9c19f.slice/crio-adcfc14ad6d1b8ddc46608312a7e792e17261431fb2eb7e8dcf8152e8dbc1e35 WatchSource:0}: Error finding container adcfc14ad6d1b8ddc46608312a7e792e17261431fb2eb7e8dcf8152e8dbc1e35: Status 404 returned error can't find the container with id adcfc14ad6d1b8ddc46608312a7e792e17261431fb2eb7e8dcf8152e8dbc1e35
Jan 21 16:00:21 crc kubenswrapper[4760]: I0121 16:00:21.446039 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-skl79" event={"ID":"57bfd668-6e8b-475a-99b4-cdbd22c9c19f","Type":"ContainerStarted","Data":"b7b7f34b41e5f16566a97d050657a7807214d4a75858256ea7a5df0588ed2d7c"}
Jan 21 16:00:21 crc kubenswrapper[4760]: I0121 16:00:21.446433 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-skl79" event={"ID":"57bfd668-6e8b-475a-99b4-cdbd22c9c19f","Type":"ContainerStarted","Data":"7f8527f913ed6e8e3f64935f0183fcb8af6f15c5a7f476a475ed4bfdbd88a682"}
Jan 21 16:00:21 crc kubenswrapper[4760]: I0121 16:00:21.446450 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-skl79" event={"ID":"57bfd668-6e8b-475a-99b4-cdbd22c9c19f","Type":"ContainerStarted","Data":"adcfc14ad6d1b8ddc46608312a7e792e17261431fb2eb7e8dcf8152e8dbc1e35"}
Jan 21 16:00:21 crc kubenswrapper[4760]: I0121 16:00:21.446747 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-skl79"
Jan 21 16:00:21 crc kubenswrapper[4760]: I0121 16:00:21.459193 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/dbe6716c-6a30-454c-979c-59566d2c29b6-memberlist\") pod \"speaker-d6jcx\" (UID: \"dbe6716c-6a30-454c-979c-59566d2c29b6\") " pod="metallb-system/speaker-d6jcx"
Jan 21 16:00:21 crc kubenswrapper[4760]: I0121 16:00:21.463095 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-skl79" podStartSLOduration=2.4630712519999998 podStartE2EDuration="2.463071252s" podCreationTimestamp="2026-01-21 16:00:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:00:21.460636439 +0000 UTC m=+792.128406037" watchObservedRunningTime="2026-01-21 16:00:21.463071252 +0000 UTC m=+792.130840830"
Jan 21 16:00:21 crc kubenswrapper[4760]: I0121 16:00:21.474336 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/dbe6716c-6a30-454c-979c-59566d2c29b6-memberlist\") pod \"speaker-d6jcx\" (UID: \"dbe6716c-6a30-454c-979c-59566d2c29b6\") " pod="metallb-system/speaker-d6jcx"
Jan 21 16:00:21 crc kubenswrapper[4760]: I0121 16:00:21.652799 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-d6jcx"
Jan 21 16:00:21 crc kubenswrapper[4760]: W0121 16:00:21.678596 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddbe6716c_6a30_454c_979c_59566d2c29b6.slice/crio-63451f0a278b9521bc8fde0013733bbacc5e25ef5aed50ce1d91575529775092 WatchSource:0}: Error finding container 63451f0a278b9521bc8fde0013733bbacc5e25ef5aed50ce1d91575529775092: Status 404 returned error can't find the container with id 63451f0a278b9521bc8fde0013733bbacc5e25ef5aed50ce1d91575529775092
Jan 21 16:00:22 crc kubenswrapper[4760]: I0121 16:00:22.659592 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-d6jcx" event={"ID":"dbe6716c-6a30-454c-979c-59566d2c29b6","Type":"ContainerStarted","Data":"e0a3e17d06ba892f9b2af39e93b4f95f99d9853362a123fcc72673e77438aa84"}
Jan 21 16:00:22 crc kubenswrapper[4760]: I0121 16:00:22.659948 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-d6jcx" event={"ID":"dbe6716c-6a30-454c-979c-59566d2c29b6","Type":"ContainerStarted","Data":"4eb6fd0e88c788399b5c0fcf408fa9d6b767735d089fca40e21e20bcc5cdea5e"}
Jan 21 16:00:22 crc kubenswrapper[4760]: I0121 16:00:22.659960 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-d6jcx" event={"ID":"dbe6716c-6a30-454c-979c-59566d2c29b6","Type":"ContainerStarted","Data":"63451f0a278b9521bc8fde0013733bbacc5e25ef5aed50ce1d91575529775092"}
Jan 21 16:00:22 crc kubenswrapper[4760]: I0121 16:00:22.660144 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-d6jcx"
Jan 21 16:00:22 crc kubenswrapper[4760]: I0121 16:00:22.689816 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-d6jcx" podStartSLOduration=3.689795676 podStartE2EDuration="3.689795676s" podCreationTimestamp="2026-01-21 16:00:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:00:22.687736653 +0000 UTC m=+793.355506231" watchObservedRunningTime="2026-01-21 16:00:22.689795676 +0000 UTC m=+793.357565254"
Jan 21 16:00:29 crc kubenswrapper[4760]: I0121 16:00:29.760656 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8cr5r" event={"ID":"120c759b-d895-4898-a35a-2c7f74bb71b2","Type":"ContainerStarted","Data":"90e5695c7d55d28a0ef54094cfb610b0cec5cc7bc1583a9a8f6778486a325ce1"}
Jan 21 16:00:29 crc kubenswrapper[4760]: I0121 16:00:29.761727 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8cr5r"
Jan 21 16:00:29 crc kubenswrapper[4760]: I0121 16:00:29.763928 4760 generic.go:334] "Generic (PLEG): container finished" podID="5f599753-8125-400e-b9dd-f94bee01fdf8" containerID="9e48e9529d9bdec72844040db90eb9ba94f64a53c0337587699b657818c77415" exitCode=0
Jan 21 16:00:29 crc kubenswrapper[4760]: I0121 16:00:29.763992 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gsbq4" event={"ID":"5f599753-8125-400e-b9dd-f94bee01fdf8","Type":"ContainerDied","Data":"9e48e9529d9bdec72844040db90eb9ba94f64a53c0337587699b657818c77415"}
Jan 21 16:00:29 crc kubenswrapper[4760]: I0121 16:00:29.785209 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8cr5r" podStartSLOduration=2.107999251 podStartE2EDuration="10.785177017s" podCreationTimestamp="2026-01-21 16:00:19 +0000 UTC" firstStartedPulling="2026-01-21 16:00:20.356845406 +0000 UTC m=+791.024614984" lastFinishedPulling="2026-01-21 16:00:29.034023172 +0000 UTC m=+799.701792750" observedRunningTime="2026-01-21 16:00:29.783671028 +0000 UTC m=+800.451440666" watchObservedRunningTime="2026-01-21 16:00:29.785177017 +0000 UTC m=+800.452946615"
Jan 21 16:00:30 crc kubenswrapper[4760]: I0121 16:00:30.773529 4760 generic.go:334] "Generic (PLEG): container finished" podID="5f599753-8125-400e-b9dd-f94bee01fdf8" containerID="120e465d65e51d34e02339b22207b87e615c571b6d2f3559a86090ca3ef9a3c8" exitCode=0
Jan 21 16:00:30 crc kubenswrapper[4760]: I0121 16:00:30.773589 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gsbq4" event={"ID":"5f599753-8125-400e-b9dd-f94bee01fdf8","Type":"ContainerDied","Data":"120e465d65e51d34e02339b22207b87e615c571b6d2f3559a86090ca3ef9a3c8"}
Jan 21 16:00:31 crc kubenswrapper[4760]: I0121 16:00:31.656223 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-d6jcx"
Jan 21 16:00:31 crc kubenswrapper[4760]: I0121 16:00:31.784580 4760 generic.go:334] "Generic (PLEG): container finished" podID="5f599753-8125-400e-b9dd-f94bee01fdf8" containerID="1c159d650a23260ff4626e520334ec99282d16deed03d55e55555b79efb6b0a5" exitCode=0
Jan 21 16:00:31 crc kubenswrapper[4760]: I0121 16:00:31.784635 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gsbq4" event={"ID":"5f599753-8125-400e-b9dd-f94bee01fdf8","Type":"ContainerDied","Data":"1c159d650a23260ff4626e520334ec99282d16deed03d55e55555b79efb6b0a5"}
Jan 21 16:00:32 crc kubenswrapper[4760]: I0121 16:00:32.794754 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gsbq4" event={"ID":"5f599753-8125-400e-b9dd-f94bee01fdf8","Type":"ContainerStarted","Data":"3d7af19f64b3fa6034fc2ada15abb197dfabd8e645d0c6643537a465e4fa9656"}
Jan 21 16:00:32 crc kubenswrapper[4760]: I0121 16:00:32.795347 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gsbq4" event={"ID":"5f599753-8125-400e-b9dd-f94bee01fdf8","Type":"ContainerStarted","Data":"27bf52d1f12138c14476463f40f8d5462a220aef05dbee372c967ece14154560"}
Jan 21 16:00:32 crc kubenswrapper[4760]: I0121 16:00:32.795365 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gsbq4" event={"ID":"5f599753-8125-400e-b9dd-f94bee01fdf8","Type":"ContainerStarted","Data":"cd6c3dc3838d6a495b58c4e6067c696c1ca8cea25ec7fc24512eabe3853d31ce"}
Jan 21 16:00:32 crc kubenswrapper[4760]: I0121 16:00:32.795376 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gsbq4" event={"ID":"5f599753-8125-400e-b9dd-f94bee01fdf8","Type":"ContainerStarted","Data":"6d216f43e2beff3320444c5a7730fe2f4034d1e25ae0bd90fa6ba3bfbda217ff"}
Jan 21 16:00:32 crc kubenswrapper[4760]: I0121 16:00:32.795388 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gsbq4" event={"ID":"5f599753-8125-400e-b9dd-f94bee01fdf8","Type":"ContainerStarted","Data":"720e223677a1d0e53b18c4bbc88811add6f55519ef5ed81fab7d00af9545264e"}
Jan 21 16:00:33 crc kubenswrapper[4760]: I0121 16:00:33.807595 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gsbq4" event={"ID":"5f599753-8125-400e-b9dd-f94bee01fdf8","Type":"ContainerStarted","Data":"2086924e557d799a35f48c425e59be3ee3f664ad659b2c300cafee34d4aee621"}
Jan 21 16:00:33 crc kubenswrapper[4760]: I0121 16:00:33.807993 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-gsbq4"
Jan 21 16:00:33 crc kubenswrapper[4760]: I0121 16:00:33.833370 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-gsbq4" podStartSLOduration=5.9965336019999995 podStartE2EDuration="14.833348026s" podCreationTimestamp="2026-01-21 16:00:19 +0000 UTC" firstStartedPulling="2026-01-21 16:00:20.217284842 +0000 UTC m=+790.885054420" lastFinishedPulling="2026-01-21 16:00:29.054099266 +0000 UTC m=+799.721868844" observedRunningTime="2026-01-21 16:00:33.828359956 +0000 UTC m=+804.496129534" watchObservedRunningTime="2026-01-21 16:00:33.833348026 +0000 UTC m=+804.501117604"
Jan 21 16:00:34 crc kubenswrapper[4760]: I0121 16:00:34.587208 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-hw5l4"]
Jan 21 16:00:34 crc kubenswrapper[4760]: I0121 16:00:34.588358 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-hw5l4"
Jan 21 16:00:34 crc kubenswrapper[4760]: I0121 16:00:34.591828 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt"
Jan 21 16:00:34 crc kubenswrapper[4760]: I0121 16:00:34.592006 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt"
Jan 21 16:00:34 crc kubenswrapper[4760]: I0121 16:00:34.592306 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-scp79"
Jan 21 16:00:34 crc kubenswrapper[4760]: I0121 16:00:34.603865 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-hw5l4"]
Jan 21 16:00:34 crc kubenswrapper[4760]: I0121 16:00:34.733430 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsxwj\" (UniqueName: \"kubernetes.io/projected/65c38ec6-8485-4ce5-aec9-566916541662-kube-api-access-bsxwj\") pod \"openstack-operator-index-hw5l4\" (UID: \"65c38ec6-8485-4ce5-aec9-566916541662\") " pod="openstack-operators/openstack-operator-index-hw5l4"
Jan 21 16:00:34 crc kubenswrapper[4760]: I0121 16:00:34.834768 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsxwj\" (UniqueName: \"kubernetes.io/projected/65c38ec6-8485-4ce5-aec9-566916541662-kube-api-access-bsxwj\") pod \"openstack-operator-index-hw5l4\" (UID: \"65c38ec6-8485-4ce5-aec9-566916541662\") " pod="openstack-operators/openstack-operator-index-hw5l4"
Jan 21 16:00:34 crc kubenswrapper[4760]: I0121 16:00:34.857028 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsxwj\" (UniqueName: \"kubernetes.io/projected/65c38ec6-8485-4ce5-aec9-566916541662-kube-api-access-bsxwj\") pod \"openstack-operator-index-hw5l4\" (UID: \"65c38ec6-8485-4ce5-aec9-566916541662\") " pod="openstack-operators/openstack-operator-index-hw5l4"
Jan 21 16:00:34 crc kubenswrapper[4760]: I0121 16:00:34.905781 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-hw5l4"
Jan 21 16:00:35 crc kubenswrapper[4760]: I0121 16:00:35.093078 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-gsbq4"
Jan 21 16:00:35 crc kubenswrapper[4760]: I0121 16:00:35.249567 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-gsbq4"
Jan 21 16:00:35 crc kubenswrapper[4760]: I0121 16:00:35.368177 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-hw5l4"]
Jan 21 16:00:35 crc kubenswrapper[4760]: I0121 16:00:35.823475 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-hw5l4" event={"ID":"65c38ec6-8485-4ce5-aec9-566916541662","Type":"ContainerStarted","Data":"b927ac0dcf13712e9174b0458565663635615a25ef1a53013ebc1ee490aaccec"}
Jan 21 16:00:37 crc kubenswrapper[4760]: I0121 16:00:37.968239 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-hw5l4"]
Jan 21 16:00:38 crc kubenswrapper[4760]: I0121 16:00:38.573280 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-7qqml"]
Jan 21 16:00:38 crc kubenswrapper[4760]: I0121 16:00:38.574466 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-7qqml"
Jan 21 16:00:38 crc kubenswrapper[4760]: I0121 16:00:38.593181 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-7qqml"]
Jan 21 16:00:38 crc kubenswrapper[4760]: I0121 16:00:38.640814 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-829jh\" (UniqueName: \"kubernetes.io/projected/593c7623-4bb3-4d34-b7cf-b7bcaa5d292e-kube-api-access-829jh\") pod \"openstack-operator-index-7qqml\" (UID: \"593c7623-4bb3-4d34-b7cf-b7bcaa5d292e\") " pod="openstack-operators/openstack-operator-index-7qqml"
Jan 21 16:00:38 crc kubenswrapper[4760]: I0121 16:00:38.743437 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-829jh\" (UniqueName: \"kubernetes.io/projected/593c7623-4bb3-4d34-b7cf-b7bcaa5d292e-kube-api-access-829jh\") pod \"openstack-operator-index-7qqml\" (UID: \"593c7623-4bb3-4d34-b7cf-b7bcaa5d292e\") " pod="openstack-operators/openstack-operator-index-7qqml"
Jan 21 16:00:38 crc kubenswrapper[4760]: I0121 16:00:38.775537 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-829jh\" (UniqueName: \"kubernetes.io/projected/593c7623-4bb3-4d34-b7cf-b7bcaa5d292e-kube-api-access-829jh\") pod \"openstack-operator-index-7qqml\" (UID: \"593c7623-4bb3-4d34-b7cf-b7bcaa5d292e\") " pod="openstack-operators/openstack-operator-index-7qqml"
Jan 21 16:00:38 crc kubenswrapper[4760]: I0121 16:00:38.843626 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-hw5l4" event={"ID":"65c38ec6-8485-4ce5-aec9-566916541662","Type":"ContainerStarted","Data":"e951cc6e5570fa37a21773f6cf47499e4e02896bb8c85fd83cd683e573ca99ad"}
Jan 21 16:00:38 crc kubenswrapper[4760]: I0121 16:00:38.863309 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-hw5l4" podStartSLOduration=2.492632346 podStartE2EDuration="4.86244821s" podCreationTimestamp="2026-01-21 16:00:34 +0000 UTC" firstStartedPulling="2026-01-21 16:00:35.376756798 +0000 UTC m=+806.044526376" lastFinishedPulling="2026-01-21 16:00:37.746572662 +0000 UTC m=+808.414342240" observedRunningTime="2026-01-21 16:00:38.857225793 +0000 UTC m=+809.524995391" watchObservedRunningTime="2026-01-21 16:00:38.86244821 +0000 UTC m=+809.530217788"
Jan 21 16:00:38 crc kubenswrapper[4760]: I0121 16:00:38.895449 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-7qqml"
Jan 21 16:00:39 crc kubenswrapper[4760]: I0121 16:00:39.170923 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-7qqml"]
Jan 21 16:00:39 crc kubenswrapper[4760]: W0121 16:00:39.174710 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod593c7623_4bb3_4d34_b7cf_b7bcaa5d292e.slice/crio-c2d8c19bd8ad26d8e7991643832f99e523975e83da66e153e06fba033b759ee5 WatchSource:0}: Error finding container c2d8c19bd8ad26d8e7991643832f99e523975e83da66e153e06fba033b759ee5: Status 404 returned error can't find the container with id c2d8c19bd8ad26d8e7991643832f99e523975e83da66e153e06fba033b759ee5
Jan 21 16:00:39 crc kubenswrapper[4760]: I0121 16:00:39.850732 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-7qqml" event={"ID":"593c7623-4bb3-4d34-b7cf-b7bcaa5d292e","Type":"ContainerStarted","Data":"6708f4a60ef73821ca08fa592f642504f4cb7dfd86d9ec66921a43b417cbfb45"}
Jan 21 16:00:39 crc kubenswrapper[4760]: I0121 16:00:39.851136 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-7qqml" event={"ID":"593c7623-4bb3-4d34-b7cf-b7bcaa5d292e","Type":"ContainerStarted","Data":"c2d8c19bd8ad26d8e7991643832f99e523975e83da66e153e06fba033b759ee5"}
Jan 21 16:00:39 crc kubenswrapper[4760]: I0121 16:00:39.850886 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-hw5l4" podUID="65c38ec6-8485-4ce5-aec9-566916541662" containerName="registry-server" containerID="cri-o://e951cc6e5570fa37a21773f6cf47499e4e02896bb8c85fd83cd683e573ca99ad" gracePeriod=2
Jan 21 16:00:39 crc kubenswrapper[4760]: I0121 16:00:39.876617 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-7qqml" podStartSLOduration=1.807249762 podStartE2EDuration="1.876593202s" podCreationTimestamp="2026-01-21 16:00:38 +0000 UTC" firstStartedPulling="2026-01-21 16:00:39.179299134 +0000 UTC m=+809.847068712" lastFinishedPulling="2026-01-21 16:00:39.248642574 +0000 UTC m=+809.916412152" observedRunningTime="2026-01-21 16:00:39.866733625 +0000 UTC m=+810.534503213" watchObservedRunningTime="2026-01-21 16:00:39.876593202 +0000 UTC m=+810.544362770"
Jan 21 16:00:40 crc kubenswrapper[4760]: I0121 16:00:40.111556 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-8cr5r"
Jan 21 16:00:40 crc kubenswrapper[4760]: I0121 16:00:40.221094 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-hw5l4"
Jan 21 16:00:40 crc kubenswrapper[4760]: I0121 16:00:40.266317 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bsxwj\" (UniqueName: \"kubernetes.io/projected/65c38ec6-8485-4ce5-aec9-566916541662-kube-api-access-bsxwj\") pod \"65c38ec6-8485-4ce5-aec9-566916541662\" (UID: \"65c38ec6-8485-4ce5-aec9-566916541662\") "
Jan 21 16:00:40 crc kubenswrapper[4760]: I0121 16:00:40.280290 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65c38ec6-8485-4ce5-aec9-566916541662-kube-api-access-bsxwj" (OuterVolumeSpecName: "kube-api-access-bsxwj") pod "65c38ec6-8485-4ce5-aec9-566916541662" (UID: "65c38ec6-8485-4ce5-aec9-566916541662"). InnerVolumeSpecName "kube-api-access-bsxwj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:00:40 crc kubenswrapper[4760]: I0121 16:00:40.368408 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bsxwj\" (UniqueName: \"kubernetes.io/projected/65c38ec6-8485-4ce5-aec9-566916541662-kube-api-access-bsxwj\") on node \"crc\" DevicePath \"\""
Jan 21 16:00:40 crc kubenswrapper[4760]: I0121 16:00:40.799090 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-skl79"
Jan 21 16:00:40 crc kubenswrapper[4760]: I0121 16:00:40.857355 4760 generic.go:334] "Generic (PLEG): container finished" podID="65c38ec6-8485-4ce5-aec9-566916541662" containerID="e951cc6e5570fa37a21773f6cf47499e4e02896bb8c85fd83cd683e573ca99ad" exitCode=0
Jan 21 16:00:40 crc kubenswrapper[4760]: I0121 16:00:40.857405 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-hw5l4"
Jan 21 16:00:40 crc kubenswrapper[4760]: I0121 16:00:40.857439 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-hw5l4" event={"ID":"65c38ec6-8485-4ce5-aec9-566916541662","Type":"ContainerDied","Data":"e951cc6e5570fa37a21773f6cf47499e4e02896bb8c85fd83cd683e573ca99ad"}
Jan 21 16:00:40 crc kubenswrapper[4760]: I0121 16:00:40.857469 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-hw5l4" event={"ID":"65c38ec6-8485-4ce5-aec9-566916541662","Type":"ContainerDied","Data":"b927ac0dcf13712e9174b0458565663635615a25ef1a53013ebc1ee490aaccec"}
Jan 21 16:00:40 crc kubenswrapper[4760]: I0121 16:00:40.857487 4760 scope.go:117] "RemoveContainer" containerID="e951cc6e5570fa37a21773f6cf47499e4e02896bb8c85fd83cd683e573ca99ad"
Jan 21 16:00:40 crc kubenswrapper[4760]: I0121 16:00:40.874128 4760 scope.go:117] "RemoveContainer" containerID="e951cc6e5570fa37a21773f6cf47499e4e02896bb8c85fd83cd683e573ca99ad"
Jan 21 16:00:40 crc kubenswrapper[4760]: E0121 16:00:40.874632 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e951cc6e5570fa37a21773f6cf47499e4e02896bb8c85fd83cd683e573ca99ad\": container with ID starting with e951cc6e5570fa37a21773f6cf47499e4e02896bb8c85fd83cd683e573ca99ad not found: ID does not exist" containerID="e951cc6e5570fa37a21773f6cf47499e4e02896bb8c85fd83cd683e573ca99ad"
Jan 21 16:00:40 crc kubenswrapper[4760]: I0121 16:00:40.874665 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e951cc6e5570fa37a21773f6cf47499e4e02896bb8c85fd83cd683e573ca99ad"} err="failed to get container status \"e951cc6e5570fa37a21773f6cf47499e4e02896bb8c85fd83cd683e573ca99ad\": rpc error: code = NotFound desc = could not find container
\"e951cc6e5570fa37a21773f6cf47499e4e02896bb8c85fd83cd683e573ca99ad\": container with ID starting with e951cc6e5570fa37a21773f6cf47499e4e02896bb8c85fd83cd683e573ca99ad not found: ID does not exist" Jan 21 16:00:40 crc kubenswrapper[4760]: I0121 16:00:40.886249 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-hw5l4"] Jan 21 16:00:40 crc kubenswrapper[4760]: I0121 16:00:40.890044 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-hw5l4"] Jan 21 16:00:41 crc kubenswrapper[4760]: I0121 16:00:41.637523 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65c38ec6-8485-4ce5-aec9-566916541662" path="/var/lib/kubelet/pods/65c38ec6-8485-4ce5-aec9-566916541662/volumes" Jan 21 16:00:48 crc kubenswrapper[4760]: I0121 16:00:48.895583 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-7qqml" Jan 21 16:00:48 crc kubenswrapper[4760]: I0121 16:00:48.896550 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-7qqml" Jan 21 16:00:49 crc kubenswrapper[4760]: I0121 16:00:49.066292 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-7qqml" Jan 21 16:00:49 crc kubenswrapper[4760]: I0121 16:00:49.092473 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-7qqml" Jan 21 16:00:50 crc kubenswrapper[4760]: I0121 16:00:50.095515 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-gsbq4" Jan 21 16:00:50 crc kubenswrapper[4760]: I0121 16:00:50.946833 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:00:50 crc kubenswrapper[4760]: I0121 16:00:50.947235 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:00:57 crc kubenswrapper[4760]: I0121 16:00:57.291469 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql"] Jan 21 16:00:57 crc kubenswrapper[4760]: E0121 16:00:57.291968 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65c38ec6-8485-4ce5-aec9-566916541662" containerName="registry-server" Jan 21 16:00:57 crc kubenswrapper[4760]: I0121 16:00:57.291982 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="65c38ec6-8485-4ce5-aec9-566916541662" containerName="registry-server" Jan 21 16:00:57 crc kubenswrapper[4760]: I0121 16:00:57.292102 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="65c38ec6-8485-4ce5-aec9-566916541662" containerName="registry-server" Jan 21 16:00:57 crc kubenswrapper[4760]: I0121 16:00:57.292968 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql" Jan 21 16:00:57 crc kubenswrapper[4760]: I0121 16:00:57.299261 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-4cg7s" Jan 21 16:00:57 crc kubenswrapper[4760]: I0121 16:00:57.308786 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql"] Jan 21 16:00:57 crc kubenswrapper[4760]: I0121 16:00:57.427978 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ab7a2391-a0e7-4576-a91a-bf31978dc7ad-util\") pod \"f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql\" (UID: \"ab7a2391-a0e7-4576-a91a-bf31978dc7ad\") " pod="openstack-operators/f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql" Jan 21 16:00:57 crc kubenswrapper[4760]: I0121 16:00:57.428080 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ab7a2391-a0e7-4576-a91a-bf31978dc7ad-bundle\") pod \"f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql\" (UID: \"ab7a2391-a0e7-4576-a91a-bf31978dc7ad\") " pod="openstack-operators/f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql" Jan 21 16:00:57 crc kubenswrapper[4760]: I0121 16:00:57.428220 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smss8\" (UniqueName: \"kubernetes.io/projected/ab7a2391-a0e7-4576-a91a-bf31978dc7ad-kube-api-access-smss8\") pod \"f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql\" (UID: \"ab7a2391-a0e7-4576-a91a-bf31978dc7ad\") " pod="openstack-operators/f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql" Jan 21 16:00:57 crc kubenswrapper[4760]: I0121 
16:00:57.529813 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ab7a2391-a0e7-4576-a91a-bf31978dc7ad-util\") pod \"f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql\" (UID: \"ab7a2391-a0e7-4576-a91a-bf31978dc7ad\") " pod="openstack-operators/f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql" Jan 21 16:00:57 crc kubenswrapper[4760]: I0121 16:00:57.529902 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ab7a2391-a0e7-4576-a91a-bf31978dc7ad-bundle\") pod \"f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql\" (UID: \"ab7a2391-a0e7-4576-a91a-bf31978dc7ad\") " pod="openstack-operators/f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql" Jan 21 16:00:57 crc kubenswrapper[4760]: I0121 16:00:57.529971 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smss8\" (UniqueName: \"kubernetes.io/projected/ab7a2391-a0e7-4576-a91a-bf31978dc7ad-kube-api-access-smss8\") pod \"f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql\" (UID: \"ab7a2391-a0e7-4576-a91a-bf31978dc7ad\") " pod="openstack-operators/f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql" Jan 21 16:00:57 crc kubenswrapper[4760]: I0121 16:00:57.530501 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ab7a2391-a0e7-4576-a91a-bf31978dc7ad-util\") pod \"f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql\" (UID: \"ab7a2391-a0e7-4576-a91a-bf31978dc7ad\") " pod="openstack-operators/f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql" Jan 21 16:00:57 crc kubenswrapper[4760]: I0121 16:00:57.530553 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/ab7a2391-a0e7-4576-a91a-bf31978dc7ad-bundle\") pod \"f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql\" (UID: \"ab7a2391-a0e7-4576-a91a-bf31978dc7ad\") " pod="openstack-operators/f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql" Jan 21 16:00:57 crc kubenswrapper[4760]: I0121 16:00:57.552903 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smss8\" (UniqueName: \"kubernetes.io/projected/ab7a2391-a0e7-4576-a91a-bf31978dc7ad-kube-api-access-smss8\") pod \"f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql\" (UID: \"ab7a2391-a0e7-4576-a91a-bf31978dc7ad\") " pod="openstack-operators/f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql" Jan 21 16:00:57 crc kubenswrapper[4760]: I0121 16:00:57.615202 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql" Jan 21 16:00:57 crc kubenswrapper[4760]: I0121 16:00:57.797997 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql"] Jan 21 16:00:57 crc kubenswrapper[4760]: I0121 16:00:57.965217 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql" event={"ID":"ab7a2391-a0e7-4576-a91a-bf31978dc7ad","Type":"ContainerStarted","Data":"4109b84ec6caaef507db6d1850af94857c9d4ddbd63a4187cfddfb3765bd99d3"} Jan 21 16:01:00 crc kubenswrapper[4760]: I0121 16:01:00.990424 4760 generic.go:334] "Generic (PLEG): container finished" podID="ab7a2391-a0e7-4576-a91a-bf31978dc7ad" containerID="fe7534cd58ab0504bf7392dc16637339bb8ff8428044000e4429e1da75c8cd4a" exitCode=0 Jan 21 16:01:00 crc kubenswrapper[4760]: I0121 16:01:00.990547 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql" event={"ID":"ab7a2391-a0e7-4576-a91a-bf31978dc7ad","Type":"ContainerDied","Data":"fe7534cd58ab0504bf7392dc16637339bb8ff8428044000e4429e1da75c8cd4a"} Jan 21 16:01:02 crc kubenswrapper[4760]: I0121 16:01:02.000383 4760 generic.go:334] "Generic (PLEG): container finished" podID="ab7a2391-a0e7-4576-a91a-bf31978dc7ad" containerID="f2c7a3fd233f2113337b35067b04aa650a3b984022c6cd7a00b698e1f37f6a93" exitCode=0 Jan 21 16:01:02 crc kubenswrapper[4760]: I0121 16:01:02.000493 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql" event={"ID":"ab7a2391-a0e7-4576-a91a-bf31978dc7ad","Type":"ContainerDied","Data":"f2c7a3fd233f2113337b35067b04aa650a3b984022c6cd7a00b698e1f37f6a93"} Jan 21 16:01:03 crc kubenswrapper[4760]: I0121 16:01:03.012197 4760 generic.go:334] "Generic (PLEG): container finished" podID="ab7a2391-a0e7-4576-a91a-bf31978dc7ad" containerID="08036cd05b05994d3a67890f360703ccfe6dde13019f36b172e7475c5a8d79e8" exitCode=0 Jan 21 16:01:03 crc kubenswrapper[4760]: I0121 16:01:03.012257 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql" event={"ID":"ab7a2391-a0e7-4576-a91a-bf31978dc7ad","Type":"ContainerDied","Data":"08036cd05b05994d3a67890f360703ccfe6dde13019f36b172e7475c5a8d79e8"} Jan 21 16:01:04 crc kubenswrapper[4760]: I0121 16:01:04.280926 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql" Jan 21 16:01:04 crc kubenswrapper[4760]: I0121 16:01:04.363490 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smss8\" (UniqueName: \"kubernetes.io/projected/ab7a2391-a0e7-4576-a91a-bf31978dc7ad-kube-api-access-smss8\") pod \"ab7a2391-a0e7-4576-a91a-bf31978dc7ad\" (UID: \"ab7a2391-a0e7-4576-a91a-bf31978dc7ad\") " Jan 21 16:01:04 crc kubenswrapper[4760]: I0121 16:01:04.363619 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ab7a2391-a0e7-4576-a91a-bf31978dc7ad-util\") pod \"ab7a2391-a0e7-4576-a91a-bf31978dc7ad\" (UID: \"ab7a2391-a0e7-4576-a91a-bf31978dc7ad\") " Jan 21 16:01:04 crc kubenswrapper[4760]: I0121 16:01:04.363650 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ab7a2391-a0e7-4576-a91a-bf31978dc7ad-bundle\") pod \"ab7a2391-a0e7-4576-a91a-bf31978dc7ad\" (UID: \"ab7a2391-a0e7-4576-a91a-bf31978dc7ad\") " Jan 21 16:01:04 crc kubenswrapper[4760]: I0121 16:01:04.364706 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab7a2391-a0e7-4576-a91a-bf31978dc7ad-bundle" (OuterVolumeSpecName: "bundle") pod "ab7a2391-a0e7-4576-a91a-bf31978dc7ad" (UID: "ab7a2391-a0e7-4576-a91a-bf31978dc7ad"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:01:04 crc kubenswrapper[4760]: I0121 16:01:04.369509 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab7a2391-a0e7-4576-a91a-bf31978dc7ad-kube-api-access-smss8" (OuterVolumeSpecName: "kube-api-access-smss8") pod "ab7a2391-a0e7-4576-a91a-bf31978dc7ad" (UID: "ab7a2391-a0e7-4576-a91a-bf31978dc7ad"). InnerVolumeSpecName "kube-api-access-smss8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:01:04 crc kubenswrapper[4760]: I0121 16:01:04.377414 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab7a2391-a0e7-4576-a91a-bf31978dc7ad-util" (OuterVolumeSpecName: "util") pod "ab7a2391-a0e7-4576-a91a-bf31978dc7ad" (UID: "ab7a2391-a0e7-4576-a91a-bf31978dc7ad"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:01:04 crc kubenswrapper[4760]: I0121 16:01:04.464920 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smss8\" (UniqueName: \"kubernetes.io/projected/ab7a2391-a0e7-4576-a91a-bf31978dc7ad-kube-api-access-smss8\") on node \"crc\" DevicePath \"\"" Jan 21 16:01:04 crc kubenswrapper[4760]: I0121 16:01:04.464968 4760 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ab7a2391-a0e7-4576-a91a-bf31978dc7ad-util\") on node \"crc\" DevicePath \"\"" Jan 21 16:01:04 crc kubenswrapper[4760]: I0121 16:01:04.464981 4760 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ab7a2391-a0e7-4576-a91a-bf31978dc7ad-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:01:05 crc kubenswrapper[4760]: I0121 16:01:05.031900 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql" event={"ID":"ab7a2391-a0e7-4576-a91a-bf31978dc7ad","Type":"ContainerDied","Data":"4109b84ec6caaef507db6d1850af94857c9d4ddbd63a4187cfddfb3765bd99d3"} Jan 21 16:01:05 crc kubenswrapper[4760]: I0121 16:01:05.031978 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4109b84ec6caaef507db6d1850af94857c9d4ddbd63a4187cfddfb3765bd99d3" Jan 21 16:01:05 crc kubenswrapper[4760]: I0121 16:01:05.031987 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql" Jan 21 16:01:09 crc kubenswrapper[4760]: I0121 16:01:09.775247 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-5bb58d564b-c5ghx"] Jan 21 16:01:09 crc kubenswrapper[4760]: E0121 16:01:09.776046 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab7a2391-a0e7-4576-a91a-bf31978dc7ad" containerName="extract" Jan 21 16:01:09 crc kubenswrapper[4760]: I0121 16:01:09.776062 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab7a2391-a0e7-4576-a91a-bf31978dc7ad" containerName="extract" Jan 21 16:01:09 crc kubenswrapper[4760]: E0121 16:01:09.776071 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab7a2391-a0e7-4576-a91a-bf31978dc7ad" containerName="util" Jan 21 16:01:09 crc kubenswrapper[4760]: I0121 16:01:09.776077 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab7a2391-a0e7-4576-a91a-bf31978dc7ad" containerName="util" Jan 21 16:01:09 crc kubenswrapper[4760]: E0121 16:01:09.776093 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab7a2391-a0e7-4576-a91a-bf31978dc7ad" containerName="pull" Jan 21 16:01:09 crc kubenswrapper[4760]: I0121 16:01:09.776098 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab7a2391-a0e7-4576-a91a-bf31978dc7ad" containerName="pull" Jan 21 16:01:09 crc kubenswrapper[4760]: I0121 16:01:09.776201 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab7a2391-a0e7-4576-a91a-bf31978dc7ad" containerName="extract" Jan 21 16:01:09 crc kubenswrapper[4760]: I0121 16:01:09.776626 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5bb58d564b-c5ghx" Jan 21 16:01:09 crc kubenswrapper[4760]: I0121 16:01:09.779892 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-zj8gl" Jan 21 16:01:09 crc kubenswrapper[4760]: I0121 16:01:09.803316 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5bb58d564b-c5ghx"] Jan 21 16:01:09 crc kubenswrapper[4760]: I0121 16:01:09.832902 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mzhs\" (UniqueName: \"kubernetes.io/projected/5ef28c93-e9fc-4d47-b280-5372e4c7aaf7-kube-api-access-8mzhs\") pod \"openstack-operator-controller-init-5bb58d564b-c5ghx\" (UID: \"5ef28c93-e9fc-4d47-b280-5372e4c7aaf7\") " pod="openstack-operators/openstack-operator-controller-init-5bb58d564b-c5ghx" Jan 21 16:01:09 crc kubenswrapper[4760]: I0121 16:01:09.934263 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mzhs\" (UniqueName: \"kubernetes.io/projected/5ef28c93-e9fc-4d47-b280-5372e4c7aaf7-kube-api-access-8mzhs\") pod \"openstack-operator-controller-init-5bb58d564b-c5ghx\" (UID: \"5ef28c93-e9fc-4d47-b280-5372e4c7aaf7\") " pod="openstack-operators/openstack-operator-controller-init-5bb58d564b-c5ghx" Jan 21 16:01:09 crc kubenswrapper[4760]: I0121 16:01:09.953178 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mzhs\" (UniqueName: \"kubernetes.io/projected/5ef28c93-e9fc-4d47-b280-5372e4c7aaf7-kube-api-access-8mzhs\") pod \"openstack-operator-controller-init-5bb58d564b-c5ghx\" (UID: \"5ef28c93-e9fc-4d47-b280-5372e4c7aaf7\") " pod="openstack-operators/openstack-operator-controller-init-5bb58d564b-c5ghx" Jan 21 16:01:10 crc kubenswrapper[4760]: I0121 16:01:10.104575 4760 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5bb58d564b-c5ghx" Jan 21 16:01:10 crc kubenswrapper[4760]: I0121 16:01:10.406759 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5bb58d564b-c5ghx"] Jan 21 16:01:11 crc kubenswrapper[4760]: I0121 16:01:11.099981 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5bb58d564b-c5ghx" event={"ID":"5ef28c93-e9fc-4d47-b280-5372e4c7aaf7","Type":"ContainerStarted","Data":"7347297e812ae883a372903b2740a607731761e42f84fc1b707110e477b91087"} Jan 21 16:01:17 crc kubenswrapper[4760]: I0121 16:01:17.144218 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5bb58d564b-c5ghx" event={"ID":"5ef28c93-e9fc-4d47-b280-5372e4c7aaf7","Type":"ContainerStarted","Data":"70163fb48aa6ed53e51d51235fe29e6ec5847cf972e1095eb5654ac175dccb3b"} Jan 21 16:01:17 crc kubenswrapper[4760]: I0121 16:01:17.145232 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-5bb58d564b-c5ghx" Jan 21 16:01:17 crc kubenswrapper[4760]: I0121 16:01:17.180225 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-5bb58d564b-c5ghx" podStartSLOduration=1.731399975 podStartE2EDuration="8.180200908s" podCreationTimestamp="2026-01-21 16:01:09 +0000 UTC" firstStartedPulling="2026-01-21 16:01:10.435873612 +0000 UTC m=+841.103643200" lastFinishedPulling="2026-01-21 16:01:16.884674545 +0000 UTC m=+847.552444133" observedRunningTime="2026-01-21 16:01:17.178981374 +0000 UTC m=+847.846750952" watchObservedRunningTime="2026-01-21 16:01:17.180200908 +0000 UTC m=+847.847970486" Jan 21 16:01:20 crc kubenswrapper[4760]: I0121 16:01:20.946025 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:01:20 crc kubenswrapper[4760]: I0121 16:01:20.946363 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:01:20 crc kubenswrapper[4760]: I0121 16:01:20.946443 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" Jan 21 16:01:20 crc kubenswrapper[4760]: I0121 16:01:20.947171 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4a4f642c0c2b59b10378fd5974f35f9fb23b198f62bb5a4dbe3d03ad54a3fd8b"} pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 16:01:20 crc kubenswrapper[4760]: I0121 16:01:20.947246 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" containerID="cri-o://4a4f642c0c2b59b10378fd5974f35f9fb23b198f62bb5a4dbe3d03ad54a3fd8b" gracePeriod=600 Jan 21 16:01:22 crc kubenswrapper[4760]: I0121 16:01:22.178307 4760 generic.go:334] "Generic (PLEG): container finished" podID="5dd365e7-570c-4130-a299-30e376624ce2" containerID="4a4f642c0c2b59b10378fd5974f35f9fb23b198f62bb5a4dbe3d03ad54a3fd8b" exitCode=0 Jan 21 16:01:22 crc kubenswrapper[4760]: I0121 16:01:22.178358 4760 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerDied","Data":"4a4f642c0c2b59b10378fd5974f35f9fb23b198f62bb5a4dbe3d03ad54a3fd8b"} Jan 21 16:01:22 crc kubenswrapper[4760]: I0121 16:01:22.178709 4760 scope.go:117] "RemoveContainer" containerID="81da7fef60e0d834b22928a0a5dcf4687660734290ef3e62d24a36191f68fa2a" Jan 21 16:01:23 crc kubenswrapper[4760]: I0121 16:01:23.186635 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerStarted","Data":"d46da82d10de2ad82e008f50494383d7547214baecf8965338b3787de8bae17f"} Jan 21 16:01:30 crc kubenswrapper[4760]: I0121 16:01:30.106965 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-5bb58d564b-c5ghx" Jan 21 16:01:56 crc kubenswrapper[4760]: I0121 16:01:56.839139 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7ddb5c749-nszmq"] Jan 21 16:01:56 crc kubenswrapper[4760]: I0121 16:01:56.840729 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-nszmq" Jan 21 16:01:56 crc kubenswrapper[4760]: I0121 16:01:56.843722 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-wr6m5" Jan 21 16:01:56 crc kubenswrapper[4760]: I0121 16:01:56.844845 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-9b68f5989-zlfp7"] Jan 21 16:01:56 crc kubenswrapper[4760]: I0121 16:01:56.845805 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-zlfp7" Jan 21 16:01:56 crc kubenswrapper[4760]: I0121 16:01:56.847205 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-4qlk6" Jan 21 16:01:56 crc kubenswrapper[4760]: I0121 16:01:56.858909 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-9b68f5989-zlfp7"] Jan 21 16:01:56 crc kubenswrapper[4760]: I0121 16:01:56.885513 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-9f958b845-kc2f5"] Jan 21 16:01:56 crc kubenswrapper[4760]: I0121 16:01:56.886539 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-9f958b845-kc2f5" Jan 21 16:01:56 crc kubenswrapper[4760]: I0121 16:01:56.889808 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-4b7jq" Jan 21 16:01:56 crc kubenswrapper[4760]: I0121 16:01:56.891568 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-c6994669c-z2bkt"] Jan 21 16:01:56 crc kubenswrapper[4760]: I0121 16:01:56.892347 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-c6994669c-z2bkt"
Jan 21 16:01:56 crc kubenswrapper[4760]: I0121 16:01:56.893777 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-979rn"
Jan 21 16:01:56 crc kubenswrapper[4760]: I0121 16:01:56.910934 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-9f958b845-kc2f5"]
Jan 21 16:01:56 crc kubenswrapper[4760]: I0121 16:01:56.921473 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7ddb5c749-nszmq"]
Jan 21 16:01:56 crc kubenswrapper[4760]: I0121 16:01:56.926278 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-c6994669c-z2bkt"]
Jan 21 16:01:56 crc kubenswrapper[4760]: I0121 16:01:56.936218 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-k92xb"]
Jan 21 16:01:56 crc kubenswrapper[4760]: I0121 16:01:56.937061 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-k92xb"
Jan 21 16:01:56 crc kubenswrapper[4760]: I0121 16:01:56.939358 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rw45\" (UniqueName: \"kubernetes.io/projected/6026e9ac-64d0-4386-bbd8-f0ac19960a22-kube-api-access-7rw45\") pod \"cinder-operator-controller-manager-9b68f5989-zlfp7\" (UID: \"6026e9ac-64d0-4386-bbd8-f0ac19960a22\") " pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-zlfp7"
Jan 21 16:01:56 crc kubenswrapper[4760]: I0121 16:01:56.939416 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cx7zf\" (UniqueName: \"kubernetes.io/projected/ebbdf3cf-f86a-471e-89d0-d2a43f8245f6-kube-api-access-cx7zf\") pod \"barbican-operator-controller-manager-7ddb5c749-nszmq\" (UID: \"ebbdf3cf-f86a-471e-89d0-d2a43f8245f6\") " pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-nszmq"
Jan 21 16:01:56 crc kubenswrapper[4760]: I0121 16:01:56.940452 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-x94j2"
Jan 21 16:01:56 crc kubenswrapper[4760]: I0121 16:01:56.954202 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-k92xb"]
Jan 21 16:01:56 crc kubenswrapper[4760]: I0121 16:01:56.969003 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wp6f6"]
Jan 21 16:01:56 crc kubenswrapper[4760]: I0121 16:01:56.972161 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wp6f6"
Jan 21 16:01:56 crc kubenswrapper[4760]: I0121 16:01:56.975002 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-t249x"
Jan 21 16:01:56 crc kubenswrapper[4760]: I0121 16:01:56.984429 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-77c48c7859-7trxk"]
Jan 21 16:01:56 crc kubenswrapper[4760]: I0121 16:01:56.985238 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-7trxk"
Jan 21 16:01:56 crc kubenswrapper[4760]: I0121 16:01:56.987483 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Jan 21 16:01:56 crc kubenswrapper[4760]: I0121 16:01:56.987659 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-7fwbn"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.034223 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wp6f6"]
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.041309 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nk8cl\" (UniqueName: \"kubernetes.io/projected/bac59717-45dd-495a-8874-b4f29a8adc3f-kube-api-access-nk8cl\") pod \"glance-operator-controller-manager-c6994669c-z2bkt\" (UID: \"bac59717-45dd-495a-8874-b4f29a8adc3f\") " pod="openstack-operators/glance-operator-controller-manager-c6994669c-z2bkt"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.041484 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k67wd\" (UniqueName: \"kubernetes.io/projected/97d1cdc7-8fc8-4e7b-b231-0cceadc61597-kube-api-access-k67wd\") pod \"heat-operator-controller-manager-594c8c9d5d-k92xb\" (UID: \"97d1cdc7-8fc8-4e7b-b231-0cceadc61597\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-k92xb"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.041604 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x96tx\" (UniqueName: \"kubernetes.io/projected/8bcbe073-fa37-480d-a74a-af4c8d6a449b-kube-api-access-x96tx\") pod \"designate-operator-controller-manager-9f958b845-kc2f5\" (UID: \"8bcbe073-fa37-480d-a74a-af4c8d6a449b\") " pod="openstack-operators/designate-operator-controller-manager-9f958b845-kc2f5"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.041702 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rw45\" (UniqueName: \"kubernetes.io/projected/6026e9ac-64d0-4386-bbd8-f0ac19960a22-kube-api-access-7rw45\") pod \"cinder-operator-controller-manager-9b68f5989-zlfp7\" (UID: \"6026e9ac-64d0-4386-bbd8-f0ac19960a22\") " pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-zlfp7"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.041774 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cx7zf\" (UniqueName: \"kubernetes.io/projected/ebbdf3cf-f86a-471e-89d0-d2a43f8245f6-kube-api-access-cx7zf\") pod \"barbican-operator-controller-manager-7ddb5c749-nszmq\" (UID: \"ebbdf3cf-f86a-471e-89d0-d2a43f8245f6\") " pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-nszmq"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.041886 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6k6q\" (UniqueName: \"kubernetes.io/projected/1b969ec1-1858-44ff-92da-a071b9ff15ee-kube-api-access-t6k6q\") pod \"horizon-operator-controller-manager-77d5c5b54f-wp6f6\" (UID: \"1b969ec1-1858-44ff-92da-a071b9ff15ee\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wp6f6"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.052600 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-77c48c7859-7trxk"]
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.066421 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-78757b4889-z7mkd"]
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.067312 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-z7mkd"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.072185 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-nxgp6"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.079977 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cx7zf\" (UniqueName: \"kubernetes.io/projected/ebbdf3cf-f86a-471e-89d0-d2a43f8245f6-kube-api-access-cx7zf\") pod \"barbican-operator-controller-manager-7ddb5c749-nszmq\" (UID: \"ebbdf3cf-f86a-471e-89d0-d2a43f8245f6\") " pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-nszmq"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.098577 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-78757b4889-z7mkd"]
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.099118 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rw45\" (UniqueName: \"kubernetes.io/projected/6026e9ac-64d0-4386-bbd8-f0ac19960a22-kube-api-access-7rw45\") pod \"cinder-operator-controller-manager-9b68f5989-zlfp7\" (UID: \"6026e9ac-64d0-4386-bbd8-f0ac19960a22\") " pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-zlfp7"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.120378 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-864f6b75bf-rjrtw"]
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.121539 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-rjrtw"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.123448 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-vqf6w"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.140653 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-767fdc4f47-pp2ln"]
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.141999 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-pp2ln"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.143162 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nk8cl\" (UniqueName: \"kubernetes.io/projected/bac59717-45dd-495a-8874-b4f29a8adc3f-kube-api-access-nk8cl\") pod \"glance-operator-controller-manager-c6994669c-z2bkt\" (UID: \"bac59717-45dd-495a-8874-b4f29a8adc3f\") " pod="openstack-operators/glance-operator-controller-manager-c6994669c-z2bkt"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.143226 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k67wd\" (UniqueName: \"kubernetes.io/projected/97d1cdc7-8fc8-4e7b-b231-0cceadc61597-kube-api-access-k67wd\") pod \"heat-operator-controller-manager-594c8c9d5d-k92xb\" (UID: \"97d1cdc7-8fc8-4e7b-b231-0cceadc61597\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-k92xb"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.143255 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x96tx\" (UniqueName: \"kubernetes.io/projected/8bcbe073-fa37-480d-a74a-af4c8d6a449b-kube-api-access-x96tx\") pod \"designate-operator-controller-manager-9f958b845-kc2f5\" (UID: \"8bcbe073-fa37-480d-a74a-af4c8d6a449b\") " pod="openstack-operators/designate-operator-controller-manager-9f958b845-kc2f5"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.143346 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ctdf\" (UniqueName: \"kubernetes.io/projected/a28cddfd-04c6-4860-a5eb-c341f2b25009-kube-api-access-7ctdf\") pod \"ironic-operator-controller-manager-78757b4889-z7mkd\" (UID: \"a28cddfd-04c6-4860-a5eb-c341f2b25009\") " pod="openstack-operators/ironic-operator-controller-manager-78757b4889-z7mkd"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.143383 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a441beba-fca9-47d4-bf5b-1533929ea421-cert\") pod \"infra-operator-controller-manager-77c48c7859-7trxk\" (UID: \"a441beba-fca9-47d4-bf5b-1533929ea421\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-7trxk"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.143422 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prdm8\" (UniqueName: \"kubernetes.io/projected/a441beba-fca9-47d4-bf5b-1533929ea421-kube-api-access-prdm8\") pod \"infra-operator-controller-manager-77c48c7859-7trxk\" (UID: \"a441beba-fca9-47d4-bf5b-1533929ea421\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-7trxk"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.143449 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6k6q\" (UniqueName: \"kubernetes.io/projected/1b969ec1-1858-44ff-92da-a071b9ff15ee-kube-api-access-t6k6q\") pod \"horizon-operator-controller-manager-77d5c5b54f-wp6f6\" (UID: \"1b969ec1-1858-44ff-92da-a071b9ff15ee\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wp6f6"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.145169 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-g88ld"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.159914 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-767fdc4f47-pp2ln"]
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.168775 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-nszmq"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.178618 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6k6q\" (UniqueName: \"kubernetes.io/projected/1b969ec1-1858-44ff-92da-a071b9ff15ee-kube-api-access-t6k6q\") pod \"horizon-operator-controller-manager-77d5c5b54f-wp6f6\" (UID: \"1b969ec1-1858-44ff-92da-a071b9ff15ee\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wp6f6"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.180403 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-zlfp7"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.183659 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k67wd\" (UniqueName: \"kubernetes.io/projected/97d1cdc7-8fc8-4e7b-b231-0cceadc61597-kube-api-access-k67wd\") pod \"heat-operator-controller-manager-594c8c9d5d-k92xb\" (UID: \"97d1cdc7-8fc8-4e7b-b231-0cceadc61597\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-k92xb"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.184584 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x96tx\" (UniqueName: \"kubernetes.io/projected/8bcbe073-fa37-480d-a74a-af4c8d6a449b-kube-api-access-x96tx\") pod \"designate-operator-controller-manager-9f958b845-kc2f5\" (UID: \"8bcbe073-fa37-480d-a74a-af4c8d6a449b\") " pod="openstack-operators/designate-operator-controller-manager-9f958b845-kc2f5"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.187608 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nk8cl\" (UniqueName: \"kubernetes.io/projected/bac59717-45dd-495a-8874-b4f29a8adc3f-kube-api-access-nk8cl\") pod \"glance-operator-controller-manager-c6994669c-z2bkt\" (UID: \"bac59717-45dd-495a-8874-b4f29a8adc3f\") " pod="openstack-operators/glance-operator-controller-manager-c6994669c-z2bkt"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.194183 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-864f6b75bf-rjrtw"]
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.203153 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-chvdr"]
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.204346 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-chvdr"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.212801 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-6nvsf"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.218418 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-chvdr"]
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.219313 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-9f958b845-kc2f5"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.229106 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-cb4666565-7vqlg"]
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.230134 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-7vqlg"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.230980 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-c6994669c-z2bkt"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.253859 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-jpccn"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.258717 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a441beba-fca9-47d4-bf5b-1533929ea421-cert\") pod \"infra-operator-controller-manager-77c48c7859-7trxk\" (UID: \"a441beba-fca9-47d4-bf5b-1533929ea421\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-7trxk"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.258848 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nhns\" (UniqueName: \"kubernetes.io/projected/f3256ca9-a35f-4ae0-a56c-ac2eaf3bbdc3-kube-api-access-8nhns\") pod \"keystone-operator-controller-manager-767fdc4f47-pp2ln\" (UID: \"f3256ca9-a35f-4ae0-a56c-ac2eaf3bbdc3\") " pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-pp2ln"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.258941 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prdm8\" (UniqueName: \"kubernetes.io/projected/a441beba-fca9-47d4-bf5b-1533929ea421-kube-api-access-prdm8\") pod \"infra-operator-controller-manager-77c48c7859-7trxk\" (UID: \"a441beba-fca9-47d4-bf5b-1533929ea421\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-7trxk"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.259137 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8msk\" (UniqueName: \"kubernetes.io/projected/1530b88f-1192-4aa8-b9ba-82f23e37ea6a-kube-api-access-w8msk\") pod \"manila-operator-controller-manager-864f6b75bf-rjrtw\" (UID: \"1530b88f-1192-4aa8-b9ba-82f23e37ea6a\") " pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-rjrtw"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.259494 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ctdf\" (UniqueName: \"kubernetes.io/projected/a28cddfd-04c6-4860-a5eb-c341f2b25009-kube-api-access-7ctdf\") pod \"ironic-operator-controller-manager-78757b4889-z7mkd\" (UID: \"a28cddfd-04c6-4860-a5eb-c341f2b25009\") " pod="openstack-operators/ironic-operator-controller-manager-78757b4889-z7mkd"
Jan 21 16:01:57 crc kubenswrapper[4760]: E0121 16:01:57.264546 4760 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 21 16:01:57 crc kubenswrapper[4760]: E0121 16:01:57.264687 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a441beba-fca9-47d4-bf5b-1533929ea421-cert podName:a441beba-fca9-47d4-bf5b-1533929ea421 nodeName:}" failed. No retries permitted until 2026-01-21 16:01:57.764645639 +0000 UTC m=+888.432415217 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a441beba-fca9-47d4-bf5b-1533929ea421-cert") pod "infra-operator-controller-manager-77c48c7859-7trxk" (UID: "a441beba-fca9-47d4-bf5b-1533929ea421") : secret "infra-operator-webhook-server-cert" not found
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.274987 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-k92xb"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.304420 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prdm8\" (UniqueName: \"kubernetes.io/projected/a441beba-fca9-47d4-bf5b-1533929ea421-kube-api-access-prdm8\") pod \"infra-operator-controller-manager-77c48c7859-7trxk\" (UID: \"a441beba-fca9-47d4-bf5b-1533929ea421\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-7trxk"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.309233 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ctdf\" (UniqueName: \"kubernetes.io/projected/a28cddfd-04c6-4860-a5eb-c341f2b25009-kube-api-access-7ctdf\") pod \"ironic-operator-controller-manager-78757b4889-z7mkd\" (UID: \"a28cddfd-04c6-4860-a5eb-c341f2b25009\") " pod="openstack-operators/ironic-operator-controller-manager-78757b4889-z7mkd"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.327484 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-cb4666565-7vqlg"]
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.331672 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wp6f6"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.337663 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-65849867d6-xckkd"]
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.340487 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-65849867d6-xckkd"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.356579 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-2kgvt"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.360380 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8msk\" (UniqueName: \"kubernetes.io/projected/1530b88f-1192-4aa8-b9ba-82f23e37ea6a-kube-api-access-w8msk\") pod \"manila-operator-controller-manager-864f6b75bf-rjrtw\" (UID: \"1530b88f-1192-4aa8-b9ba-82f23e37ea6a\") " pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-rjrtw"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.360424 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m27wx\" (UniqueName: \"kubernetes.io/projected/80ad016c-9145-4e38-90f1-515a1fcd0fc7-kube-api-access-m27wx\") pod \"mariadb-operator-controller-manager-c87fff755-chvdr\" (UID: \"80ad016c-9145-4e38-90f1-515a1fcd0fc7\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-chvdr"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.360462 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j9tv\" (UniqueName: \"kubernetes.io/projected/2ef1c912-1599-4799-8f4c-1c9cb20045ba-kube-api-access-5j9tv\") pod \"neutron-operator-controller-manager-cb4666565-7vqlg\" (UID: \"2ef1c912-1599-4799-8f4c-1c9cb20045ba\") " pod="openstack-operators/neutron-operator-controller-manager-cb4666565-7vqlg"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.360511 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nhns\" (UniqueName: \"kubernetes.io/projected/f3256ca9-a35f-4ae0-a56c-ac2eaf3bbdc3-kube-api-access-8nhns\") pod \"keystone-operator-controller-manager-767fdc4f47-pp2ln\" (UID: \"f3256ca9-a35f-4ae0-a56c-ac2eaf3bbdc3\") " pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-pp2ln"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.360990 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-65849867d6-xckkd"]
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.374788 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-566bc"]
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.375942 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-566bc"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.382957 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-w8csp"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.385358 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8msk\" (UniqueName: \"kubernetes.io/projected/1530b88f-1192-4aa8-b9ba-82f23e37ea6a-kube-api-access-w8msk\") pod \"manila-operator-controller-manager-864f6b75bf-rjrtw\" (UID: \"1530b88f-1192-4aa8-b9ba-82f23e37ea6a\") " pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-rjrtw"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.399667 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-566bc"]
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.404270 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nhns\" (UniqueName: \"kubernetes.io/projected/f3256ca9-a35f-4ae0-a56c-ac2eaf3bbdc3-kube-api-access-8nhns\") pod \"keystone-operator-controller-manager-767fdc4f47-pp2ln\" (UID: \"f3256ca9-a35f-4ae0-a56c-ac2eaf3bbdc3\") " pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-pp2ln"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.438678 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-ffq4x"]
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.447597 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ffq4x"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.451052 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-85n5m"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.457321 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-z7mkd"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.461096 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m27wx\" (UniqueName: \"kubernetes.io/projected/80ad016c-9145-4e38-90f1-515a1fcd0fc7-kube-api-access-m27wx\") pod \"mariadb-operator-controller-manager-c87fff755-chvdr\" (UID: \"80ad016c-9145-4e38-90f1-515a1fcd0fc7\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-chvdr"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.461140 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5j9tv\" (UniqueName: \"kubernetes.io/projected/2ef1c912-1599-4799-8f4c-1c9cb20045ba-kube-api-access-5j9tv\") pod \"neutron-operator-controller-manager-cb4666565-7vqlg\" (UID: \"2ef1c912-1599-4799-8f4c-1c9cb20045ba\") " pod="openstack-operators/neutron-operator-controller-manager-cb4666565-7vqlg"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.461172 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwl65\" (UniqueName: \"kubernetes.io/projected/7e819adc-151b-456f-b41f-5101b03ab7b2-kube-api-access-jwl65\") pod \"nova-operator-controller-manager-65849867d6-xckkd\" (UID: \"7e819adc-151b-456f-b41f-5101b03ab7b2\") " pod="openstack-operators/nova-operator-controller-manager-65849867d6-xckkd"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.461240 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtm24\" (UniqueName: \"kubernetes.io/projected/0252011a-4dac-4cad-94b3-39a6cf9bcd42-kube-api-access-wtm24\") pod \"octavia-operator-controller-manager-7fc9b76cf6-566bc\" (UID: \"0252011a-4dac-4cad-94b3-39a6cf9bcd42\") " pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-566bc"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.465970 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt"]
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.467316 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.498680 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-ffq4x"]
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.505504 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-686df47fcb-lqgfs"]
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.506753 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-lqgfs"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.529470 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-85dd56d4cc-49prq"]
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.531294 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt"]
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.531342 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-686df47fcb-lqgfs"]
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.531444 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-49prq"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.540886 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-rjrtw"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.543528 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.543614 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-9g9c8"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.543888 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-8bm6c"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.544851 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-x22pn"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.562985 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwl65\" (UniqueName: \"kubernetes.io/projected/7e819adc-151b-456f-b41f-5101b03ab7b2-kube-api-access-jwl65\") pod \"nova-operator-controller-manager-65849867d6-xckkd\" (UID: \"7e819adc-151b-456f-b41f-5101b03ab7b2\") " pod="openstack-operators/nova-operator-controller-manager-65849867d6-xckkd"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.563082 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr8qr\" (UniqueName: \"kubernetes.io/projected/daef61f2-122d-4414-b7df-24982387fa95-kube-api-access-kr8qr\") pod \"ovn-operator-controller-manager-55db956ddc-ffq4x\" (UID: \"daef61f2-122d-4414-b7df-24982387fa95\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ffq4x"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.563116 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nsrm\" (UniqueName: \"kubernetes.io/projected/75bcd345-56d6-4c12-9392-eea68c43dc30-kube-api-access-2nsrm\") pod \"placement-operator-controller-manager-686df47fcb-lqgfs\" (UID: \"75bcd345-56d6-4c12-9392-eea68c43dc30\") " pod="openstack-operators/placement-operator-controller-manager-686df47fcb-lqgfs"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.563145 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/28e62955-b747-4ca8-aa6b-d0678242596f-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt\" (UID: \"28e62955-b747-4ca8-aa6b-d0678242596f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.563178 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rldd\" (UniqueName: \"kubernetes.io/projected/28e62955-b747-4ca8-aa6b-d0678242596f-kube-api-access-6rldd\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt\" (UID: \"28e62955-b747-4ca8-aa6b-d0678242596f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.563213 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtm24\" (UniqueName: \"kubernetes.io/projected/0252011a-4dac-4cad-94b3-39a6cf9bcd42-kube-api-access-wtm24\") pod \"octavia-operator-controller-manager-7fc9b76cf6-566bc\" (UID: \"0252011a-4dac-4cad-94b3-39a6cf9bcd42\") " pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-566bc"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.563255 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwg9x\" (UniqueName: \"kubernetes.io/projected/8d3c8a68-0896-4875-b6ff-d6f6fd2794b6-kube-api-access-jwg9x\") pod \"swift-operator-controller-manager-85dd56d4cc-49prq\" (UID: \"8d3c8a68-0896-4875-b6ff-d6f6fd2794b6\") " pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-49prq"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.580763 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-85dd56d4cc-49prq"]
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.580896 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m27wx\" (UniqueName: \"kubernetes.io/projected/80ad016c-9145-4e38-90f1-515a1fcd0fc7-kube-api-access-m27wx\") pod \"mariadb-operator-controller-manager-c87fff755-chvdr\" (UID: \"80ad016c-9145-4e38-90f1-515a1fcd0fc7\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-chvdr"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.588482 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j9tv\" (UniqueName: \"kubernetes.io/projected/2ef1c912-1599-4799-8f4c-1c9cb20045ba-kube-api-access-5j9tv\") pod \"neutron-operator-controller-manager-cb4666565-7vqlg\" (UID: \"2ef1c912-1599-4799-8f4c-1c9cb20045ba\") " pod="openstack-operators/neutron-operator-controller-manager-cb4666565-7vqlg"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.599378 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-m7zb2"]
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.601362 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtm24\" (UniqueName: \"kubernetes.io/projected/0252011a-4dac-4cad-94b3-39a6cf9bcd42-kube-api-access-wtm24\") pod \"octavia-operator-controller-manager-7fc9b76cf6-566bc\" (UID: \"0252011a-4dac-4cad-94b3-39a6cf9bcd42\") " pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-566bc"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.606508 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-m7zb2"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.610925 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-qvqvm"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.617677 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwl65\" (UniqueName: \"kubernetes.io/projected/7e819adc-151b-456f-b41f-5101b03ab7b2-kube-api-access-jwl65\") pod \"nova-operator-controller-manager-65849867d6-xckkd\" (UID: \"7e819adc-151b-456f-b41f-5101b03ab7b2\") " pod="openstack-operators/nova-operator-controller-manager-65849867d6-xckkd"
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.623741 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-m7zb2"]
Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.633486 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-pp2ln" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.666151 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/28e62955-b747-4ca8-aa6b-d0678242596f-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt\" (UID: \"28e62955-b747-4ca8-aa6b-d0678242596f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.666220 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rldd\" (UniqueName: \"kubernetes.io/projected/28e62955-b747-4ca8-aa6b-d0678242596f-kube-api-access-6rldd\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt\" (UID: \"28e62955-b747-4ca8-aa6b-d0678242596f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.666264 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwg9x\" (UniqueName: \"kubernetes.io/projected/8d3c8a68-0896-4875-b6ff-d6f6fd2794b6-kube-api-access-jwg9x\") pod \"swift-operator-controller-manager-85dd56d4cc-49prq\" (UID: \"8d3c8a68-0896-4875-b6ff-d6f6fd2794b6\") " pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-49prq" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.666314 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdmr9\" (UniqueName: \"kubernetes.io/projected/b511b419-e589-4783-a6a8-6d6fee8decde-kube-api-access-zdmr9\") pod \"telemetry-operator-controller-manager-5f8f495fcf-m7zb2\" (UID: \"b511b419-e589-4783-a6a8-6d6fee8decde\") " pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-m7zb2" Jan 21 16:01:57 crc 
kubenswrapper[4760]: I0121 16:01:57.666374 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kr8qr\" (UniqueName: \"kubernetes.io/projected/daef61f2-122d-4414-b7df-24982387fa95-kube-api-access-kr8qr\") pod \"ovn-operator-controller-manager-55db956ddc-ffq4x\" (UID: \"daef61f2-122d-4414-b7df-24982387fa95\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ffq4x" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.666400 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nsrm\" (UniqueName: \"kubernetes.io/projected/75bcd345-56d6-4c12-9392-eea68c43dc30-kube-api-access-2nsrm\") pod \"placement-operator-controller-manager-686df47fcb-lqgfs\" (UID: \"75bcd345-56d6-4c12-9392-eea68c43dc30\") " pod="openstack-operators/placement-operator-controller-manager-686df47fcb-lqgfs" Jan 21 16:01:57 crc kubenswrapper[4760]: E0121 16:01:57.667867 4760 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 16:01:57 crc kubenswrapper[4760]: E0121 16:01:57.667916 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28e62955-b747-4ca8-aa6b-d0678242596f-cert podName:28e62955-b747-4ca8-aa6b-d0678242596f nodeName:}" failed. No retries permitted until 2026-01-21 16:01:58.167901377 +0000 UTC m=+888.835670955 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/28e62955-b747-4ca8-aa6b-d0678242596f-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt" (UID: "28e62955-b747-4ca8-aa6b-d0678242596f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.692646 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-chvdr" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.703972 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-7vqlg" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.705106 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwg9x\" (UniqueName: \"kubernetes.io/projected/8d3c8a68-0896-4875-b6ff-d6f6fd2794b6-kube-api-access-jwg9x\") pod \"swift-operator-controller-manager-85dd56d4cc-49prq\" (UID: \"8d3c8a68-0896-4875-b6ff-d6f6fd2794b6\") " pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-49prq" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.715831 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nsrm\" (UniqueName: \"kubernetes.io/projected/75bcd345-56d6-4c12-9392-eea68c43dc30-kube-api-access-2nsrm\") pod \"placement-operator-controller-manager-686df47fcb-lqgfs\" (UID: \"75bcd345-56d6-4c12-9392-eea68c43dc30\") " pod="openstack-operators/placement-operator-controller-manager-686df47fcb-lqgfs" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.717052 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kr8qr\" (UniqueName: \"kubernetes.io/projected/daef61f2-122d-4414-b7df-24982387fa95-kube-api-access-kr8qr\") pod \"ovn-operator-controller-manager-55db956ddc-ffq4x\" (UID: \"daef61f2-122d-4414-b7df-24982387fa95\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ffq4x" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.732873 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-65849867d6-xckkd" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.734883 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rldd\" (UniqueName: \"kubernetes.io/projected/28e62955-b747-4ca8-aa6b-d0678242596f-kube-api-access-6rldd\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt\" (UID: \"28e62955-b747-4ca8-aa6b-d0678242596f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.749764 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-49prq" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.766096 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-566bc" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.767723 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdmr9\" (UniqueName: \"kubernetes.io/projected/b511b419-e589-4783-a6a8-6d6fee8decde-kube-api-access-zdmr9\") pod \"telemetry-operator-controller-manager-5f8f495fcf-m7zb2\" (UID: \"b511b419-e589-4783-a6a8-6d6fee8decde\") " pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-m7zb2" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.767794 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a441beba-fca9-47d4-bf5b-1533929ea421-cert\") pod \"infra-operator-controller-manager-77c48c7859-7trxk\" (UID: \"a441beba-fca9-47d4-bf5b-1533929ea421\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-7trxk" Jan 21 16:01:57 crc kubenswrapper[4760]: E0121 16:01:57.771854 4760 secret.go:188] Couldn't get secret 
openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 16:01:57 crc kubenswrapper[4760]: E0121 16:01:57.772097 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a441beba-fca9-47d4-bf5b-1533929ea421-cert podName:a441beba-fca9-47d4-bf5b-1533929ea421 nodeName:}" failed. No retries permitted until 2026-01-21 16:01:58.77196146 +0000 UTC m=+889.439731078 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a441beba-fca9-47d4-bf5b-1533929ea421-cert") pod "infra-operator-controller-manager-77c48c7859-7trxk" (UID: "a441beba-fca9-47d4-bf5b-1533929ea421") : secret "infra-operator-webhook-server-cert" not found Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.780280 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7cd8bc9dbb-cfsr6"] Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.781055 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7cd8bc9dbb-cfsr6"] Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.781078 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-64cd966744-fkd2l"] Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.781554 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-64cd966744-fkd2l"] Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.781580 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-867799c6f-wh9wg"] Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.790389 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-cfsr6" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.791382 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-fkd2l" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.791930 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-867799c6f-wh9wg" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.805953 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.806477 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-867799c6f-wh9wg"] Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.806885 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.807064 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-f2k9j" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.807195 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-wz7fl" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.807753 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-dh9lm" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.853778 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdmr9\" (UniqueName: \"kubernetes.io/projected/b511b419-e589-4783-a6a8-6d6fee8decde-kube-api-access-zdmr9\") pod 
\"telemetry-operator-controller-manager-5f8f495fcf-m7zb2\" (UID: \"b511b419-e589-4783-a6a8-6d6fee8decde\") " pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-m7zb2" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.866057 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vxwmq"] Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.870885 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vxwmq" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.874826 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ffq4x" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.875211 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpztk\" (UniqueName: \"kubernetes.io/projected/813b8c35-22e2-41a4-9523-a6cf3cd99ab2-kube-api-access-dpztk\") pod \"test-operator-controller-manager-7cd8bc9dbb-cfsr6\" (UID: \"813b8c35-22e2-41a4-9523-a6cf3cd99ab2\") " pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-cfsr6" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.875267 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-webhook-certs\") pod \"openstack-operator-controller-manager-867799c6f-wh9wg\" (UID: \"4023c758-3567-4e32-97de-9501e117e965\") " pod="openstack-operators/openstack-operator-controller-manager-867799c6f-wh9wg" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.875550 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6s6n\" (UniqueName: 
\"kubernetes.io/projected/4023c758-3567-4e32-97de-9501e117e965-kube-api-access-x6s6n\") pod \"openstack-operator-controller-manager-867799c6f-wh9wg\" (UID: \"4023c758-3567-4e32-97de-9501e117e965\") " pod="openstack-operators/openstack-operator-controller-manager-867799c6f-wh9wg" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.875613 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-metrics-certs\") pod \"openstack-operator-controller-manager-867799c6f-wh9wg\" (UID: \"4023c758-3567-4e32-97de-9501e117e965\") " pod="openstack-operators/openstack-operator-controller-manager-867799c6f-wh9wg" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.875705 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xf29\" (UniqueName: \"kubernetes.io/projected/d8bbdcea-a920-4fb4-b434-2323a28d0ea7-kube-api-access-2xf29\") pod \"watcher-operator-controller-manager-64cd966744-fkd2l\" (UID: \"d8bbdcea-a920-4fb4-b434-2323a28d0ea7\") " pod="openstack-operators/watcher-operator-controller-manager-64cd966744-fkd2l" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.876194 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-grvm4" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.877653 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vxwmq"] Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.961950 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-lqgfs" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.977305 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6s6n\" (UniqueName: \"kubernetes.io/projected/4023c758-3567-4e32-97de-9501e117e965-kube-api-access-x6s6n\") pod \"openstack-operator-controller-manager-867799c6f-wh9wg\" (UID: \"4023c758-3567-4e32-97de-9501e117e965\") " pod="openstack-operators/openstack-operator-controller-manager-867799c6f-wh9wg" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.977375 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-metrics-certs\") pod \"openstack-operator-controller-manager-867799c6f-wh9wg\" (UID: \"4023c758-3567-4e32-97de-9501e117e965\") " pod="openstack-operators/openstack-operator-controller-manager-867799c6f-wh9wg" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.977434 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xf29\" (UniqueName: \"kubernetes.io/projected/d8bbdcea-a920-4fb4-b434-2323a28d0ea7-kube-api-access-2xf29\") pod \"watcher-operator-controller-manager-64cd966744-fkd2l\" (UID: \"d8bbdcea-a920-4fb4-b434-2323a28d0ea7\") " pod="openstack-operators/watcher-operator-controller-manager-64cd966744-fkd2l" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.977467 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfwwg\" (UniqueName: \"kubernetes.io/projected/a2806ede-c1d4-4571-8829-1b94cf7d1606-kube-api-access-kfwwg\") pod \"rabbitmq-cluster-operator-manager-668c99d594-vxwmq\" (UID: \"a2806ede-c1d4-4571-8829-1b94cf7d1606\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vxwmq" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 
16:01:57.977546 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpztk\" (UniqueName: \"kubernetes.io/projected/813b8c35-22e2-41a4-9523-a6cf3cd99ab2-kube-api-access-dpztk\") pod \"test-operator-controller-manager-7cd8bc9dbb-cfsr6\" (UID: \"813b8c35-22e2-41a4-9523-a6cf3cd99ab2\") " pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-cfsr6" Jan 21 16:01:57 crc kubenswrapper[4760]: I0121 16:01:57.977571 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-webhook-certs\") pod \"openstack-operator-controller-manager-867799c6f-wh9wg\" (UID: \"4023c758-3567-4e32-97de-9501e117e965\") " pod="openstack-operators/openstack-operator-controller-manager-867799c6f-wh9wg" Jan 21 16:01:57 crc kubenswrapper[4760]: E0121 16:01:57.977749 4760 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 16:01:57 crc kubenswrapper[4760]: E0121 16:01:57.977810 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-webhook-certs podName:4023c758-3567-4e32-97de-9501e117e965 nodeName:}" failed. No retries permitted until 2026-01-21 16:01:58.477792057 +0000 UTC m=+889.145561635 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-webhook-certs") pod "openstack-operator-controller-manager-867799c6f-wh9wg" (UID: "4023c758-3567-4e32-97de-9501e117e965") : secret "webhook-server-cert" not found Jan 21 16:01:57 crc kubenswrapper[4760]: E0121 16:01:57.978431 4760 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 16:01:57 crc kubenswrapper[4760]: E0121 16:01:57.978471 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-metrics-certs podName:4023c758-3567-4e32-97de-9501e117e965 nodeName:}" failed. No retries permitted until 2026-01-21 16:01:58.478460724 +0000 UTC m=+889.146230312 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-metrics-certs") pod "openstack-operator-controller-manager-867799c6f-wh9wg" (UID: "4023c758-3567-4e32-97de-9501e117e965") : secret "metrics-server-cert" not found Jan 21 16:01:58 crc kubenswrapper[4760]: I0121 16:01:58.003420 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6s6n\" (UniqueName: \"kubernetes.io/projected/4023c758-3567-4e32-97de-9501e117e965-kube-api-access-x6s6n\") pod \"openstack-operator-controller-manager-867799c6f-wh9wg\" (UID: \"4023c758-3567-4e32-97de-9501e117e965\") " pod="openstack-operators/openstack-operator-controller-manager-867799c6f-wh9wg" Jan 21 16:01:58 crc kubenswrapper[4760]: I0121 16:01:58.003613 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpztk\" (UniqueName: \"kubernetes.io/projected/813b8c35-22e2-41a4-9523-a6cf3cd99ab2-kube-api-access-dpztk\") pod \"test-operator-controller-manager-7cd8bc9dbb-cfsr6\" (UID: \"813b8c35-22e2-41a4-9523-a6cf3cd99ab2\") " 
pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-cfsr6" Jan 21 16:01:58 crc kubenswrapper[4760]: I0121 16:01:58.004530 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xf29\" (UniqueName: \"kubernetes.io/projected/d8bbdcea-a920-4fb4-b434-2323a28d0ea7-kube-api-access-2xf29\") pod \"watcher-operator-controller-manager-64cd966744-fkd2l\" (UID: \"d8bbdcea-a920-4fb4-b434-2323a28d0ea7\") " pod="openstack-operators/watcher-operator-controller-manager-64cd966744-fkd2l" Jan 21 16:01:58 crc kubenswrapper[4760]: I0121 16:01:58.078580 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfwwg\" (UniqueName: \"kubernetes.io/projected/a2806ede-c1d4-4571-8829-1b94cf7d1606-kube-api-access-kfwwg\") pod \"rabbitmq-cluster-operator-manager-668c99d594-vxwmq\" (UID: \"a2806ede-c1d4-4571-8829-1b94cf7d1606\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vxwmq" Jan 21 16:01:58 crc kubenswrapper[4760]: I0121 16:01:58.103265 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfwwg\" (UniqueName: \"kubernetes.io/projected/a2806ede-c1d4-4571-8829-1b94cf7d1606-kube-api-access-kfwwg\") pod \"rabbitmq-cluster-operator-manager-668c99d594-vxwmq\" (UID: \"a2806ede-c1d4-4571-8829-1b94cf7d1606\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vxwmq" Jan 21 16:01:58 crc kubenswrapper[4760]: I0121 16:01:58.157682 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-m7zb2" Jan 21 16:01:58 crc kubenswrapper[4760]: I0121 16:01:58.180734 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/28e62955-b747-4ca8-aa6b-d0678242596f-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt\" (UID: \"28e62955-b747-4ca8-aa6b-d0678242596f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt" Jan 21 16:01:58 crc kubenswrapper[4760]: E0121 16:01:58.181041 4760 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 16:01:58 crc kubenswrapper[4760]: E0121 16:01:58.181124 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28e62955-b747-4ca8-aa6b-d0678242596f-cert podName:28e62955-b747-4ca8-aa6b-d0678242596f nodeName:}" failed. No retries permitted until 2026-01-21 16:01:59.181097665 +0000 UTC m=+889.848867243 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/28e62955-b747-4ca8-aa6b-d0678242596f-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt" (UID: "28e62955-b747-4ca8-aa6b-d0678242596f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 16:01:58 crc kubenswrapper[4760]: I0121 16:01:58.232288 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-cfsr6" Jan 21 16:01:58 crc kubenswrapper[4760]: I0121 16:01:58.256077 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-fkd2l" Jan 21 16:01:58 crc kubenswrapper[4760]: I0121 16:01:58.270764 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vxwmq" Jan 21 16:01:58 crc kubenswrapper[4760]: I0121 16:01:58.497220 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-metrics-certs\") pod \"openstack-operator-controller-manager-867799c6f-wh9wg\" (UID: \"4023c758-3567-4e32-97de-9501e117e965\") " pod="openstack-operators/openstack-operator-controller-manager-867799c6f-wh9wg" Jan 21 16:01:58 crc kubenswrapper[4760]: I0121 16:01:58.497761 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-webhook-certs\") pod \"openstack-operator-controller-manager-867799c6f-wh9wg\" (UID: \"4023c758-3567-4e32-97de-9501e117e965\") " pod="openstack-operators/openstack-operator-controller-manager-867799c6f-wh9wg" Jan 21 16:01:58 crc kubenswrapper[4760]: E0121 16:01:58.497963 4760 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 16:01:58 crc kubenswrapper[4760]: E0121 16:01:58.498080 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-metrics-certs podName:4023c758-3567-4e32-97de-9501e117e965 nodeName:}" failed. No retries permitted until 2026-01-21 16:01:59.498056782 +0000 UTC m=+890.165826360 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-metrics-certs") pod "openstack-operator-controller-manager-867799c6f-wh9wg" (UID: "4023c758-3567-4e32-97de-9501e117e965") : secret "metrics-server-cert" not found Jan 21 16:01:58 crc kubenswrapper[4760]: E0121 16:01:58.497970 4760 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 16:01:58 crc kubenswrapper[4760]: E0121 16:01:58.503911 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-webhook-certs podName:4023c758-3567-4e32-97de-9501e117e965 nodeName:}" failed. No retries permitted until 2026-01-21 16:01:59.503871347 +0000 UTC m=+890.171640925 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-webhook-certs") pod "openstack-operator-controller-manager-867799c6f-wh9wg" (UID: "4023c758-3567-4e32-97de-9501e117e965") : secret "webhook-server-cert" not found Jan 21 16:01:58 crc kubenswrapper[4760]: I0121 16:01:58.687826 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-9b68f5989-zlfp7"] Jan 21 16:01:58 crc kubenswrapper[4760]: I0121 16:01:58.705285 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7ddb5c749-nszmq"] Jan 21 16:01:58 crc kubenswrapper[4760]: I0121 16:01:58.741016 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-9f958b845-kc2f5"] Jan 21 16:01:58 crc kubenswrapper[4760]: I0121 16:01:58.807212 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a441beba-fca9-47d4-bf5b-1533929ea421-cert\") pod 
\"infra-operator-controller-manager-77c48c7859-7trxk\" (UID: \"a441beba-fca9-47d4-bf5b-1533929ea421\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-7trxk" Jan 21 16:01:58 crc kubenswrapper[4760]: E0121 16:01:58.807471 4760 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 16:01:58 crc kubenswrapper[4760]: E0121 16:01:58.807828 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a441beba-fca9-47d4-bf5b-1533929ea421-cert podName:a441beba-fca9-47d4-bf5b-1533929ea421 nodeName:}" failed. No retries permitted until 2026-01-21 16:02:00.807806658 +0000 UTC m=+891.475576236 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a441beba-fca9-47d4-bf5b-1533929ea421-cert") pod "infra-operator-controller-manager-77c48c7859-7trxk" (UID: "a441beba-fca9-47d4-bf5b-1533929ea421") : secret "infra-operator-webhook-server-cert" not found Jan 21 16:01:58 crc kubenswrapper[4760]: I0121 16:01:58.935560 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-78757b4889-z7mkd"] Jan 21 16:01:58 crc kubenswrapper[4760]: W0121 16:01:58.949179 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda28cddfd_04c6_4860_a5eb_c341f2b25009.slice/crio-2ff465d295d33a2d6a11a977dc7aef5c9f6fb455ca42c5a8d349fcd9b3c89b52 WatchSource:0}: Error finding container 2ff465d295d33a2d6a11a977dc7aef5c9f6fb455ca42c5a8d349fcd9b3c89b52: Status 404 returned error can't find the container with id 2ff465d295d33a2d6a11a977dc7aef5c9f6fb455ca42c5a8d349fcd9b3c89b52 Jan 21 16:01:58 crc kubenswrapper[4760]: I0121 16:01:58.952888 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-c6994669c-z2bkt"] Jan 21 16:01:58 crc 
kubenswrapper[4760]: I0121 16:01:58.960440 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wp6f6"] Jan 21 16:01:59 crc kubenswrapper[4760]: I0121 16:01:59.113624 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-767fdc4f47-pp2ln"] Jan 21 16:01:59 crc kubenswrapper[4760]: I0121 16:01:59.148505 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-864f6b75bf-rjrtw"] Jan 21 16:01:59 crc kubenswrapper[4760]: I0121 16:01:59.162821 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-k92xb"] Jan 21 16:01:59 crc kubenswrapper[4760]: I0121 16:01:59.169881 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-ffq4x"] Jan 21 16:01:59 crc kubenswrapper[4760]: I0121 16:01:59.175462 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-cb4666565-7vqlg"] Jan 21 16:01:59 crc kubenswrapper[4760]: I0121 16:01:59.214850 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/28e62955-b747-4ca8-aa6b-d0678242596f-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt\" (UID: \"28e62955-b747-4ca8-aa6b-d0678242596f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt" Jan 21 16:01:59 crc kubenswrapper[4760]: E0121 16:01:59.215179 4760 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 16:01:59 crc kubenswrapper[4760]: E0121 16:01:59.215300 4760 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/28e62955-b747-4ca8-aa6b-d0678242596f-cert podName:28e62955-b747-4ca8-aa6b-d0678242596f nodeName:}" failed. No retries permitted until 2026-01-21 16:02:01.215269938 +0000 UTC m=+891.883039566 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/28e62955-b747-4ca8-aa6b-d0678242596f-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt" (UID: "28e62955-b747-4ca8-aa6b-d0678242596f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 16:01:59 crc kubenswrapper[4760]: I0121 16:01:59.441729 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-7vqlg" event={"ID":"2ef1c912-1599-4799-8f4c-1c9cb20045ba","Type":"ContainerStarted","Data":"7fd6fc6aa255cf87bd2b2370367173531c6a9c36f9e55933348f3f02a2e854ff"} Jan 21 16:01:59 crc kubenswrapper[4760]: I0121 16:01:59.443237 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-k92xb" event={"ID":"97d1cdc7-8fc8-4e7b-b231-0cceadc61597","Type":"ContainerStarted","Data":"71233fe4d586be0f7506d4619a481ef190c4788628d1275dea0425684637d77f"} Jan 21 16:01:59 crc kubenswrapper[4760]: I0121 16:01:59.444445 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-zlfp7" event={"ID":"6026e9ac-64d0-4386-bbd8-f0ac19960a22","Type":"ContainerStarted","Data":"31f685317e284b6d19dc46585bc33cea16621a1f148bb6e1a910ddf570b1cc84"} Jan 21 16:01:59 crc kubenswrapper[4760]: I0121 16:01:59.445680 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wp6f6" event={"ID":"1b969ec1-1858-44ff-92da-a071b9ff15ee","Type":"ContainerStarted","Data":"96966a57b56ac1faeacad0d4033dd5a5276f8bdc50cea1c2fff6721f102afecd"} Jan 21 16:01:59 crc kubenswrapper[4760]: 
I0121 16:01:59.446885 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-nszmq" event={"ID":"ebbdf3cf-f86a-471e-89d0-d2a43f8245f6","Type":"ContainerStarted","Data":"4e57b34f2343ac5533c7491996c71adb1c429e57357cbcbb2ee36c56d8e4c51b"} Jan 21 16:01:59 crc kubenswrapper[4760]: I0121 16:01:59.447809 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-z7mkd" event={"ID":"a28cddfd-04c6-4860-a5eb-c341f2b25009","Type":"ContainerStarted","Data":"2ff465d295d33a2d6a11a977dc7aef5c9f6fb455ca42c5a8d349fcd9b3c89b52"} Jan 21 16:01:59 crc kubenswrapper[4760]: I0121 16:01:59.449020 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-pp2ln" event={"ID":"f3256ca9-a35f-4ae0-a56c-ac2eaf3bbdc3","Type":"ContainerStarted","Data":"beed4c25acc1575e70ee8031a375cee1c13d63b446db2f32f304f5d43a20e2da"} Jan 21 16:01:59 crc kubenswrapper[4760]: I0121 16:01:59.451477 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-rjrtw" event={"ID":"1530b88f-1192-4aa8-b9ba-82f23e37ea6a","Type":"ContainerStarted","Data":"a0b7712298eddb17e051621df0d58307fae815ca886734622d9cd6864e47d621"} Jan 21 16:01:59 crc kubenswrapper[4760]: I0121 16:01:59.452746 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-c6994669c-z2bkt" event={"ID":"bac59717-45dd-495a-8874-b4f29a8adc3f","Type":"ContainerStarted","Data":"1eaa875138df09c08f38bbf5d69907b835cfab0686c41bebc5d9fba19e557e76"} Jan 21 16:01:59 crc kubenswrapper[4760]: I0121 16:01:59.454198 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-9f958b845-kc2f5" 
event={"ID":"8bcbe073-fa37-480d-a74a-af4c8d6a449b","Type":"ContainerStarted","Data":"7839d817c695f817d21b280d641587ca13bd4ae16fbaa543eb42d7fd5f634f81"} Jan 21 16:01:59 crc kubenswrapper[4760]: I0121 16:01:59.458086 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ffq4x" event={"ID":"daef61f2-122d-4414-b7df-24982387fa95","Type":"ContainerStarted","Data":"e760031610013d651bb81a899ac41265672364d057d34cd59b7655798c220862"} Jan 21 16:01:59 crc kubenswrapper[4760]: I0121 16:01:59.488390 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7cd8bc9dbb-cfsr6"] Jan 21 16:01:59 crc kubenswrapper[4760]: I0121 16:01:59.508239 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vxwmq"] Jan 21 16:01:59 crc kubenswrapper[4760]: I0121 16:01:59.520191 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-metrics-certs\") pod \"openstack-operator-controller-manager-867799c6f-wh9wg\" (UID: \"4023c758-3567-4e32-97de-9501e117e965\") " pod="openstack-operators/openstack-operator-controller-manager-867799c6f-wh9wg" Jan 21 16:01:59 crc kubenswrapper[4760]: I0121 16:01:59.520315 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-webhook-certs\") pod \"openstack-operator-controller-manager-867799c6f-wh9wg\" (UID: \"4023c758-3567-4e32-97de-9501e117e965\") " pod="openstack-operators/openstack-operator-controller-manager-867799c6f-wh9wg" Jan 21 16:01:59 crc kubenswrapper[4760]: E0121 16:01:59.520494 4760 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 16:01:59 crc kubenswrapper[4760]: 
E0121 16:01:59.520554 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-webhook-certs podName:4023c758-3567-4e32-97de-9501e117e965 nodeName:}" failed. No retries permitted until 2026-01-21 16:02:01.520534284 +0000 UTC m=+892.188303852 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-webhook-certs") pod "openstack-operator-controller-manager-867799c6f-wh9wg" (UID: "4023c758-3567-4e32-97de-9501e117e965") : secret "webhook-server-cert" not found Jan 21 16:01:59 crc kubenswrapper[4760]: E0121 16:01:59.520896 4760 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 16:01:59 crc kubenswrapper[4760]: E0121 16:01:59.520935 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-metrics-certs podName:4023c758-3567-4e32-97de-9501e117e965 nodeName:}" failed. No retries permitted until 2026-01-21 16:02:01.520925374 +0000 UTC m=+892.188694952 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-metrics-certs") pod "openstack-operator-controller-manager-867799c6f-wh9wg" (UID: "4023c758-3567-4e32-97de-9501e117e965") : secret "metrics-server-cert" not found Jan 21 16:01:59 crc kubenswrapper[4760]: I0121 16:01:59.520964 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-566bc"] Jan 21 16:01:59 crc kubenswrapper[4760]: I0121 16:01:59.530180 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-85dd56d4cc-49prq"] Jan 21 16:01:59 crc kubenswrapper[4760]: W0121 16:01:59.539364 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda2806ede_c1d4_4571_8829_1b94cf7d1606.slice/crio-06184aa02a18330c9059a9a40fa1a3dec0549c2252c2d2373156bdc96f3be7e1 WatchSource:0}: Error finding container 06184aa02a18330c9059a9a40fa1a3dec0549c2252c2d2373156bdc96f3be7e1: Status 404 returned error can't find the container with id 06184aa02a18330c9059a9a40fa1a3dec0549c2252c2d2373156bdc96f3be7e1 Jan 21 16:01:59 crc kubenswrapper[4760]: I0121 16:01:59.541179 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-65849867d6-xckkd"] Jan 21 16:01:59 crc kubenswrapper[4760]: I0121 16:01:59.548211 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-m7zb2"] Jan 21 16:01:59 crc kubenswrapper[4760]: I0121 16:01:59.569710 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-64cd966744-fkd2l"] Jan 21 16:01:59 crc kubenswrapper[4760]: E0121 16:01:59.577547 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:d687150a46d97eb382dcd8305a2a611943af74771debe1fa9cc13a21e51c69ad,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2xf29,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-64cd966744-fkd2l_openstack-operators(d8bbdcea-a920-4fb4-b434-2323a28d0ea7): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 16:01:59 crc kubenswrapper[4760]: E0121 16:01:59.580534 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-fkd2l" podUID="d8bbdcea-a920-4fb4-b434-2323a28d0ea7" Jan 21 16:01:59 crc kubenswrapper[4760]: I0121 16:01:59.583419 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-chvdr"] Jan 21 16:01:59 crc kubenswrapper[4760]: E0121 16:01:59.583713 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:6defa56fc6a5bfbd5b27d28ff7b1c7bc89b24b2ef956e2a6d97b2726f668a231,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jwl65,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-65849867d6-xckkd_openstack-operators(7e819adc-151b-456f-b41f-5101b03ab7b2): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 16:01:59 crc kubenswrapper[4760]: E0121 16:01:59.584980 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-65849867d6-xckkd" podUID="7e819adc-151b-456f-b41f-5101b03ab7b2" Jan 21 16:01:59 crc kubenswrapper[4760]: E0121 16:01:59.585181 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:ff0b6c27e2d96afccd73fbbb5b5297a3f60c7f4f1dfd2a877152466697018d71,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m27wx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-c87fff755-chvdr_openstack-operators(80ad016c-9145-4e38-90f1-515a1fcd0fc7): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 16:01:59 crc kubenswrapper[4760]: E0121 16:01:59.585160 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:ab629ec4ce57b5cde9cd6d75069e68edca441b97b7b5a3f58804e2e61766b729,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wtm24,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-7fc9b76cf6-566bc_openstack-operators(0252011a-4dac-4cad-94b3-39a6cf9bcd42): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 16:01:59 crc kubenswrapper[4760]: E0121 16:01:59.586279 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-chvdr" podUID="80ad016c-9145-4e38-90f1-515a1fcd0fc7" Jan 21 16:01:59 crc 
kubenswrapper[4760]: E0121 16:01:59.586578 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-566bc" podUID="0252011a-4dac-4cad-94b3-39a6cf9bcd42" Jan 21 16:01:59 crc kubenswrapper[4760]: E0121 16:01:59.586978 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zdmr9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-5f8f495fcf-m7zb2_openstack-operators(b511b419-e589-4783-a6a8-6d6fee8decde): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 16:01:59 crc kubenswrapper[4760]: E0121 16:01:59.588163 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-m7zb2" podUID="b511b419-e589-4783-a6a8-6d6fee8decde" Jan 21 16:01:59 crc kubenswrapper[4760]: E0121 16:01:59.589545 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:9404536bf7cb7c3818e1a0f92b53e4d7c02fe7942324f32894106f02f8fc7e92,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jwg9x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-85dd56d4cc-49prq_openstack-operators(8d3c8a68-0896-4875-b6ff-d6f6fd2794b6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 16:01:59 crc kubenswrapper[4760]: E0121 16:01:59.591015 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-49prq" podUID="8d3c8a68-0896-4875-b6ff-d6f6fd2794b6" Jan 21 16:01:59 crc kubenswrapper[4760]: I0121 16:01:59.595186 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-686df47fcb-lqgfs"] Jan 21 16:01:59 crc kubenswrapper[4760]: W0121 16:01:59.600842 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75bcd345_56d6_4c12_9392_eea68c43dc30.slice/crio-08f25d9ab2421d1bbc9859ee1842a9c87fcc21262e5cbec30aee4ef06a458701 WatchSource:0}: Error finding container 08f25d9ab2421d1bbc9859ee1842a9c87fcc21262e5cbec30aee4ef06a458701: Status 404 returned error can't find the container with id 
08f25d9ab2421d1bbc9859ee1842a9c87fcc21262e5cbec30aee4ef06a458701 Jan 21 16:02:00 crc kubenswrapper[4760]: I0121 16:02:00.479894 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-49prq" event={"ID":"8d3c8a68-0896-4875-b6ff-d6f6fd2794b6","Type":"ContainerStarted","Data":"39ac760ccac1e2e9b8c779c4528a4bcf9bf7773751e121a45c68fba118dd2a60"} Jan 21 16:02:00 crc kubenswrapper[4760]: E0121 16:02:00.481884 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:9404536bf7cb7c3818e1a0f92b53e4d7c02fe7942324f32894106f02f8fc7e92\\\"\"" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-49prq" podUID="8d3c8a68-0896-4875-b6ff-d6f6fd2794b6" Jan 21 16:02:00 crc kubenswrapper[4760]: I0121 16:02:00.492223 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-fkd2l" event={"ID":"d8bbdcea-a920-4fb4-b434-2323a28d0ea7","Type":"ContainerStarted","Data":"d912a98ff378a6c5711999f535874e713406720101b2563616789a45657f8b6d"} Jan 21 16:02:00 crc kubenswrapper[4760]: I0121 16:02:00.496273 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vxwmq" event={"ID":"a2806ede-c1d4-4571-8829-1b94cf7d1606","Type":"ContainerStarted","Data":"06184aa02a18330c9059a9a40fa1a3dec0549c2252c2d2373156bdc96f3be7e1"} Jan 21 16:02:00 crc kubenswrapper[4760]: E0121 16:02:00.497377 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d687150a46d97eb382dcd8305a2a611943af74771debe1fa9cc13a21e51c69ad\\\"\"" 
pod="openstack-operators/watcher-operator-controller-manager-64cd966744-fkd2l" podUID="d8bbdcea-a920-4fb4-b434-2323a28d0ea7" Jan 21 16:02:00 crc kubenswrapper[4760]: I0121 16:02:00.502891 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-566bc" event={"ID":"0252011a-4dac-4cad-94b3-39a6cf9bcd42","Type":"ContainerStarted","Data":"4947ee4fcafd52368e8f88c5d3ad732ed181d5b42c78957d0199cc8ca0dd3bf2"} Jan 21 16:02:00 crc kubenswrapper[4760]: E0121 16:02:00.506056 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:ab629ec4ce57b5cde9cd6d75069e68edca441b97b7b5a3f58804e2e61766b729\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-566bc" podUID="0252011a-4dac-4cad-94b3-39a6cf9bcd42" Jan 21 16:02:00 crc kubenswrapper[4760]: I0121 16:02:00.540810 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-lqgfs" event={"ID":"75bcd345-56d6-4c12-9392-eea68c43dc30","Type":"ContainerStarted","Data":"08f25d9ab2421d1bbc9859ee1842a9c87fcc21262e5cbec30aee4ef06a458701"} Jan 21 16:02:00 crc kubenswrapper[4760]: I0121 16:02:00.550165 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-m7zb2" event={"ID":"b511b419-e589-4783-a6a8-6d6fee8decde","Type":"ContainerStarted","Data":"24b6e83736ad48390031fb1b881239d2311282407378e3025ec7bb50b95e98af"} Jan 21 16:02:00 crc kubenswrapper[4760]: E0121 16:02:00.556941 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-m7zb2" podUID="b511b419-e589-4783-a6a8-6d6fee8decde" Jan 21 16:02:00 crc kubenswrapper[4760]: I0121 16:02:00.562236 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-cfsr6" event={"ID":"813b8c35-22e2-41a4-9523-a6cf3cd99ab2","Type":"ContainerStarted","Data":"5e4070850d7c9d10460dd1ed95a3645b2f8392d4fb1327d54eeb36fd44326be7"} Jan 21 16:02:00 crc kubenswrapper[4760]: I0121 16:02:00.582951 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-65849867d6-xckkd" event={"ID":"7e819adc-151b-456f-b41f-5101b03ab7b2","Type":"ContainerStarted","Data":"a7e3f2b0b7b1f029318be9df1b8dc81d23fd87e7f5762724bf8f723d2c5ca375"} Jan 21 16:02:00 crc kubenswrapper[4760]: E0121 16:02:00.585753 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:6defa56fc6a5bfbd5b27d28ff7b1c7bc89b24b2ef956e2a6d97b2726f668a231\\\"\"" pod="openstack-operators/nova-operator-controller-manager-65849867d6-xckkd" podUID="7e819adc-151b-456f-b41f-5101b03ab7b2" Jan 21 16:02:00 crc kubenswrapper[4760]: I0121 16:02:00.589616 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-chvdr" event={"ID":"80ad016c-9145-4e38-90f1-515a1fcd0fc7","Type":"ContainerStarted","Data":"fe2fda7d67518aa710cccb877af00ae6475e448f01df6fffdf34dd93f7966bcb"} Jan 21 16:02:00 crc kubenswrapper[4760]: E0121 16:02:00.592730 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:ff0b6c27e2d96afccd73fbbb5b5297a3f60c7f4f1dfd2a877152466697018d71\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-chvdr" podUID="80ad016c-9145-4e38-90f1-515a1fcd0fc7" Jan 21 16:02:00 crc kubenswrapper[4760]: I0121 16:02:00.855064 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a441beba-fca9-47d4-bf5b-1533929ea421-cert\") pod \"infra-operator-controller-manager-77c48c7859-7trxk\" (UID: \"a441beba-fca9-47d4-bf5b-1533929ea421\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-7trxk" Jan 21 16:02:00 crc kubenswrapper[4760]: E0121 16:02:00.855246 4760 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 16:02:00 crc kubenswrapper[4760]: E0121 16:02:00.855345 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a441beba-fca9-47d4-bf5b-1533929ea421-cert podName:a441beba-fca9-47d4-bf5b-1533929ea421 nodeName:}" failed. No retries permitted until 2026-01-21 16:02:04.855307857 +0000 UTC m=+895.523077435 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a441beba-fca9-47d4-bf5b-1533929ea421-cert") pod "infra-operator-controller-manager-77c48c7859-7trxk" (UID: "a441beba-fca9-47d4-bf5b-1533929ea421") : secret "infra-operator-webhook-server-cert" not found Jan 21 16:02:01 crc kubenswrapper[4760]: I0121 16:02:01.272640 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/28e62955-b747-4ca8-aa6b-d0678242596f-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt\" (UID: \"28e62955-b747-4ca8-aa6b-d0678242596f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt" Jan 21 16:02:01 crc kubenswrapper[4760]: E0121 16:02:01.273581 4760 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 16:02:01 crc kubenswrapper[4760]: E0121 16:02:01.273650 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28e62955-b747-4ca8-aa6b-d0678242596f-cert podName:28e62955-b747-4ca8-aa6b-d0678242596f nodeName:}" failed. No retries permitted until 2026-01-21 16:02:05.273629296 +0000 UTC m=+895.941398864 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/28e62955-b747-4ca8-aa6b-d0678242596f-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt" (UID: "28e62955-b747-4ca8-aa6b-d0678242596f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 16:02:01 crc kubenswrapper[4760]: I0121 16:02:01.577405 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-webhook-certs\") pod \"openstack-operator-controller-manager-867799c6f-wh9wg\" (UID: \"4023c758-3567-4e32-97de-9501e117e965\") " pod="openstack-operators/openstack-operator-controller-manager-867799c6f-wh9wg" Jan 21 16:02:01 crc kubenswrapper[4760]: I0121 16:02:01.577515 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-metrics-certs\") pod \"openstack-operator-controller-manager-867799c6f-wh9wg\" (UID: \"4023c758-3567-4e32-97de-9501e117e965\") " pod="openstack-operators/openstack-operator-controller-manager-867799c6f-wh9wg" Jan 21 16:02:01 crc kubenswrapper[4760]: E0121 16:02:01.577721 4760 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 16:02:01 crc kubenswrapper[4760]: E0121 16:02:01.577779 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-metrics-certs podName:4023c758-3567-4e32-97de-9501e117e965 nodeName:}" failed. No retries permitted until 2026-01-21 16:02:05.577763392 +0000 UTC m=+896.245532970 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-metrics-certs") pod "openstack-operator-controller-manager-867799c6f-wh9wg" (UID: "4023c758-3567-4e32-97de-9501e117e965") : secret "metrics-server-cert" not found Jan 21 16:02:01 crc kubenswrapper[4760]: E0121 16:02:01.578493 4760 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 16:02:01 crc kubenswrapper[4760]: E0121 16:02:01.578553 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-webhook-certs podName:4023c758-3567-4e32-97de-9501e117e965 nodeName:}" failed. No retries permitted until 2026-01-21 16:02:05.578533702 +0000 UTC m=+896.246303280 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-webhook-certs") pod "openstack-operator-controller-manager-867799c6f-wh9wg" (UID: "4023c758-3567-4e32-97de-9501e117e965") : secret "webhook-server-cert" not found Jan 21 16:02:01 crc kubenswrapper[4760]: E0121 16:02:01.603624 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-m7zb2" podUID="b511b419-e589-4783-a6a8-6d6fee8decde" Jan 21 16:02:01 crc kubenswrapper[4760]: E0121 16:02:01.605385 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:ff0b6c27e2d96afccd73fbbb5b5297a3f60c7f4f1dfd2a877152466697018d71\\\"\"" 
pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-chvdr" podUID="80ad016c-9145-4e38-90f1-515a1fcd0fc7" Jan 21 16:02:01 crc kubenswrapper[4760]: E0121 16:02:01.605446 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d687150a46d97eb382dcd8305a2a611943af74771debe1fa9cc13a21e51c69ad\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-fkd2l" podUID="d8bbdcea-a920-4fb4-b434-2323a28d0ea7" Jan 21 16:02:01 crc kubenswrapper[4760]: E0121 16:02:01.605480 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:6defa56fc6a5bfbd5b27d28ff7b1c7bc89b24b2ef956e2a6d97b2726f668a231\\\"\"" pod="openstack-operators/nova-operator-controller-manager-65849867d6-xckkd" podUID="7e819adc-151b-456f-b41f-5101b03ab7b2" Jan 21 16:02:01 crc kubenswrapper[4760]: E0121 16:02:01.605483 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:9404536bf7cb7c3818e1a0f92b53e4d7c02fe7942324f32894106f02f8fc7e92\\\"\"" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-49prq" podUID="8d3c8a68-0896-4875-b6ff-d6f6fd2794b6" Jan 21 16:02:01 crc kubenswrapper[4760]: E0121 16:02:01.605614 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:ab629ec4ce57b5cde9cd6d75069e68edca441b97b7b5a3f58804e2e61766b729\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-566bc" 
podUID="0252011a-4dac-4cad-94b3-39a6cf9bcd42" Jan 21 16:02:04 crc kubenswrapper[4760]: I0121 16:02:04.944173 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a441beba-fca9-47d4-bf5b-1533929ea421-cert\") pod \"infra-operator-controller-manager-77c48c7859-7trxk\" (UID: \"a441beba-fca9-47d4-bf5b-1533929ea421\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-7trxk" Jan 21 16:02:04 crc kubenswrapper[4760]: E0121 16:02:04.944371 4760 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 16:02:04 crc kubenswrapper[4760]: E0121 16:02:04.944706 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a441beba-fca9-47d4-bf5b-1533929ea421-cert podName:a441beba-fca9-47d4-bf5b-1533929ea421 nodeName:}" failed. No retries permitted until 2026-01-21 16:02:12.944686186 +0000 UTC m=+903.612455764 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a441beba-fca9-47d4-bf5b-1533929ea421-cert") pod "infra-operator-controller-manager-77c48c7859-7trxk" (UID: "a441beba-fca9-47d4-bf5b-1533929ea421") : secret "infra-operator-webhook-server-cert" not found Jan 21 16:02:05 crc kubenswrapper[4760]: I0121 16:02:05.350002 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/28e62955-b747-4ca8-aa6b-d0678242596f-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt\" (UID: \"28e62955-b747-4ca8-aa6b-d0678242596f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt" Jan 21 16:02:05 crc kubenswrapper[4760]: E0121 16:02:05.350242 4760 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 16:02:05 crc kubenswrapper[4760]: E0121 16:02:05.350390 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28e62955-b747-4ca8-aa6b-d0678242596f-cert podName:28e62955-b747-4ca8-aa6b-d0678242596f nodeName:}" failed. No retries permitted until 2026-01-21 16:02:13.350364108 +0000 UTC m=+904.018133686 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/28e62955-b747-4ca8-aa6b-d0678242596f-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt" (UID: "28e62955-b747-4ca8-aa6b-d0678242596f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 16:02:05 crc kubenswrapper[4760]: I0121 16:02:05.654235 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-metrics-certs\") pod \"openstack-operator-controller-manager-867799c6f-wh9wg\" (UID: \"4023c758-3567-4e32-97de-9501e117e965\") " pod="openstack-operators/openstack-operator-controller-manager-867799c6f-wh9wg" Jan 21 16:02:05 crc kubenswrapper[4760]: I0121 16:02:05.654487 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-webhook-certs\") pod \"openstack-operator-controller-manager-867799c6f-wh9wg\" (UID: \"4023c758-3567-4e32-97de-9501e117e965\") " pod="openstack-operators/openstack-operator-controller-manager-867799c6f-wh9wg" Jan 21 16:02:05 crc kubenswrapper[4760]: E0121 16:02:05.654522 4760 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 16:02:05 crc kubenswrapper[4760]: E0121 16:02:05.654609 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-metrics-certs podName:4023c758-3567-4e32-97de-9501e117e965 nodeName:}" failed. No retries permitted until 2026-01-21 16:02:13.654579706 +0000 UTC m=+904.322349284 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-metrics-certs") pod "openstack-operator-controller-manager-867799c6f-wh9wg" (UID: "4023c758-3567-4e32-97de-9501e117e965") : secret "metrics-server-cert" not found Jan 21 16:02:05 crc kubenswrapper[4760]: E0121 16:02:05.654621 4760 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 16:02:05 crc kubenswrapper[4760]: E0121 16:02:05.654659 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-webhook-certs podName:4023c758-3567-4e32-97de-9501e117e965 nodeName:}" failed. No retries permitted until 2026-01-21 16:02:13.654649518 +0000 UTC m=+904.322419096 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-webhook-certs") pod "openstack-operator-controller-manager-867799c6f-wh9wg" (UID: "4023c758-3567-4e32-97de-9501e117e965") : secret "webhook-server-cert" not found Jan 21 16:02:11 crc kubenswrapper[4760]: E0121 16:02:11.946502 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e" Jan 21 16:02:11 crc kubenswrapper[4760]: E0121 16:02:11.947281 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dpztk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7cd8bc9dbb-cfsr6_openstack-operators(813b8c35-22e2-41a4-9523-a6cf3cd99ab2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 16:02:11 crc kubenswrapper[4760]: E0121 16:02:11.948554 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-cfsr6" podUID="813b8c35-22e2-41a4-9523-a6cf3cd99ab2" Jan 21 16:02:12 crc kubenswrapper[4760]: E0121 16:02:12.703407 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e\\\"\"" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-cfsr6" podUID="813b8c35-22e2-41a4-9523-a6cf3cd99ab2" Jan 21 16:02:12 crc kubenswrapper[4760]: E0121 16:02:12.739167 4760 log.go:32] "PullImage from image service failed" err="rpc 
error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:146961cac3291daf96c1ca2bc7bd52bc94d1f4787a0770e23205c2c9beb0d737" Jan 21 16:02:12 crc kubenswrapper[4760]: E0121 16:02:12.739702 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:146961cac3291daf96c1ca2bc7bd52bc94d1f4787a0770e23205c2c9beb0d737,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2nsrm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-686df47fcb-lqgfs_openstack-operators(75bcd345-56d6-4c12-9392-eea68c43dc30): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 16:02:12 crc kubenswrapper[4760]: E0121 16:02:12.741097 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-lqgfs" podUID="75bcd345-56d6-4c12-9392-eea68c43dc30" Jan 21 16:02:13 crc kubenswrapper[4760]: I0121 16:02:13.009495 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a441beba-fca9-47d4-bf5b-1533929ea421-cert\") pod \"infra-operator-controller-manager-77c48c7859-7trxk\" (UID: \"a441beba-fca9-47d4-bf5b-1533929ea421\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-7trxk" Jan 21 16:02:13 crc kubenswrapper[4760]: I0121 16:02:13.033296 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/a441beba-fca9-47d4-bf5b-1533929ea421-cert\") pod \"infra-operator-controller-manager-77c48c7859-7trxk\" (UID: \"a441beba-fca9-47d4-bf5b-1533929ea421\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-7trxk" Jan 21 16:02:13 crc kubenswrapper[4760]: I0121 16:02:13.255535 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-7fwbn" Jan 21 16:02:13 crc kubenswrapper[4760]: I0121 16:02:13.267464 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-7trxk" Jan 21 16:02:13 crc kubenswrapper[4760]: I0121 16:02:13.417156 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/28e62955-b747-4ca8-aa6b-d0678242596f-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt\" (UID: \"28e62955-b747-4ca8-aa6b-d0678242596f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt" Jan 21 16:02:13 crc kubenswrapper[4760]: I0121 16:02:13.421099 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/28e62955-b747-4ca8-aa6b-d0678242596f-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt\" (UID: \"28e62955-b747-4ca8-aa6b-d0678242596f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt" Jan 21 16:02:13 crc kubenswrapper[4760]: I0121 16:02:13.512729 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-x22pn" Jan 21 16:02:13 crc kubenswrapper[4760]: I0121 16:02:13.521425 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt" Jan 21 16:02:13 crc kubenswrapper[4760]: E0121 16:02:13.655371 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:0d59a405f50b37c833e14c0f4987e95c8769d9ab06a7087078bdd02568c18ca8" Jan 21 16:02:13 crc kubenswrapper[4760]: E0121 16:02:13.655666 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:0d59a405f50b37c833e14c0f4987e95c8769d9ab06a7087078bdd02568c18ca8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x96tx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-9f958b845-kc2f5_openstack-operators(8bcbe073-fa37-480d-a74a-af4c8d6a449b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 16:02:13 crc kubenswrapper[4760]: E0121 16:02:13.656911 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-9f958b845-kc2f5" podUID="8bcbe073-fa37-480d-a74a-af4c8d6a449b" Jan 21 16:02:13 crc kubenswrapper[4760]: I0121 16:02:13.725132 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-webhook-certs\") pod 
\"openstack-operator-controller-manager-867799c6f-wh9wg\" (UID: \"4023c758-3567-4e32-97de-9501e117e965\") " pod="openstack-operators/openstack-operator-controller-manager-867799c6f-wh9wg" Jan 21 16:02:13 crc kubenswrapper[4760]: I0121 16:02:13.725318 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-metrics-certs\") pod \"openstack-operator-controller-manager-867799c6f-wh9wg\" (UID: \"4023c758-3567-4e32-97de-9501e117e965\") " pod="openstack-operators/openstack-operator-controller-manager-867799c6f-wh9wg" Jan 21 16:02:13 crc kubenswrapper[4760]: I0121 16:02:13.736820 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-metrics-certs\") pod \"openstack-operator-controller-manager-867799c6f-wh9wg\" (UID: \"4023c758-3567-4e32-97de-9501e117e965\") " pod="openstack-operators/openstack-operator-controller-manager-867799c6f-wh9wg" Jan 21 16:02:13 crc kubenswrapper[4760]: I0121 16:02:13.740881 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4023c758-3567-4e32-97de-9501e117e965-webhook-certs\") pod \"openstack-operator-controller-manager-867799c6f-wh9wg\" (UID: \"4023c758-3567-4e32-97de-9501e117e965\") " pod="openstack-operators/openstack-operator-controller-manager-867799c6f-wh9wg" Jan 21 16:02:13 crc kubenswrapper[4760]: E0121 16:02:13.751210 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:146961cac3291daf96c1ca2bc7bd52bc94d1f4787a0770e23205c2c9beb0d737\\\"\"" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-lqgfs" podUID="75bcd345-56d6-4c12-9392-eea68c43dc30" Jan 21 16:02:13 
crc kubenswrapper[4760]: E0121 16:02:13.751594 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:0d59a405f50b37c833e14c0f4987e95c8769d9ab06a7087078bdd02568c18ca8\\\"\"" pod="openstack-operators/designate-operator-controller-manager-9f958b845-kc2f5" podUID="8bcbe073-fa37-480d-a74a-af4c8d6a449b" Jan 21 16:02:13 crc kubenswrapper[4760]: I0121 16:02:13.780903 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-f2k9j" Jan 21 16:02:13 crc kubenswrapper[4760]: I0121 16:02:13.788099 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-867799c6f-wh9wg" Jan 21 16:02:14 crc kubenswrapper[4760]: E0121 16:02:14.624985 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:0f440bf7dc937ce0135bdd328716686fd2f1320f453a9ac4e11e96383148ad6c" Jan 21 16:02:14 crc kubenswrapper[4760]: E0121 16:02:14.625482 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:0f440bf7dc937ce0135bdd328716686fd2f1320f453a9ac4e11e96383148ad6c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m 
DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5j9tv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-cb4666565-7vqlg_openstack-operators(2ef1c912-1599-4799-8f4c-1c9cb20045ba): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 16:02:14 crc kubenswrapper[4760]: E0121 16:02:14.626629 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with 
ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-7vqlg" podUID="2ef1c912-1599-4799-8f4c-1c9cb20045ba" Jan 21 16:02:14 crc kubenswrapper[4760]: E0121 16:02:14.739872 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:0f440bf7dc937ce0135bdd328716686fd2f1320f453a9ac4e11e96383148ad6c\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-7vqlg" podUID="2ef1c912-1599-4799-8f4c-1c9cb20045ba" Jan 21 16:02:15 crc kubenswrapper[4760]: E0121 16:02:15.838237 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:f0634d8cf7c2c2919ca248a6883ce43d6ae4ac59252c987a5cfe17643fe7d38a" Jan 21 16:02:15 crc kubenswrapper[4760]: E0121 16:02:15.839052 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:f0634d8cf7c2c2919ca248a6883ce43d6ae4ac59252c987a5cfe17643fe7d38a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cx7zf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-7ddb5c749-nszmq_openstack-operators(ebbdf3cf-f86a-471e-89d0-d2a43f8245f6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 16:02:15 crc kubenswrapper[4760]: E0121 16:02:15.842542 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-nszmq" podUID="ebbdf3cf-f86a-471e-89d0-d2a43f8245f6" Jan 21 16:02:16 crc kubenswrapper[4760]: E0121 16:02:16.762192 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/barbican-operator@sha256:f0634d8cf7c2c2919ca248a6883ce43d6ae4ac59252c987a5cfe17643fe7d38a\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-nszmq" podUID="ebbdf3cf-f86a-471e-89d0-d2a43f8245f6" Jan 21 16:02:17 crc kubenswrapper[4760]: E0121 16:02:17.957741 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:ddb59f1a8e3fd0d641405e371e33b3d8c913af08e40e84f390e7e06f0a7f3488" Jan 21 16:02:17 crc kubenswrapper[4760]: E0121 16:02:17.958562 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:ddb59f1a8e3fd0d641405e371e33b3d8c913af08e40e84f390e7e06f0a7f3488,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7rw45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-9b68f5989-zlfp7_openstack-operators(6026e9ac-64d0-4386-bbd8-f0ac19960a22): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 16:02:17 crc kubenswrapper[4760]: E0121 16:02:17.959842 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-zlfp7" podUID="6026e9ac-64d0-4386-bbd8-f0ac19960a22" Jan 21 16:02:18 crc kubenswrapper[4760]: E0121 16:02:18.778062 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:ddb59f1a8e3fd0d641405e371e33b3d8c913af08e40e84f390e7e06f0a7f3488\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-zlfp7" podUID="6026e9ac-64d0-4386-bbd8-f0ac19960a22" Jan 21 16:02:19 crc kubenswrapper[4760]: E0121 16:02:19.875602 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf" Jan 21 16:02:19 crc kubenswrapper[4760]: E0121 16:02:19.875861 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kr8qr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-55db956ddc-ffq4x_openstack-operators(daef61f2-122d-4414-b7df-24982387fa95): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 16:02:19 crc kubenswrapper[4760]: E0121 16:02:19.877065 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ffq4x" podUID="daef61f2-122d-4414-b7df-24982387fa95" Jan 21 16:02:20 crc kubenswrapper[4760]: E0121 16:02:20.794311 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ffq4x" podUID="daef61f2-122d-4414-b7df-24982387fa95" Jan 21 16:02:20 crc kubenswrapper[4760]: E0121 16:02:20.814658 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822" Jan 21 16:02:20 crc kubenswrapper[4760]: E0121 16:02:20.814931 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t6k6q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-77d5c5b54f-wp6f6_openstack-operators(1b969ec1-1858-44ff-92da-a071b9ff15ee): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 16:02:20 crc kubenswrapper[4760]: E0121 16:02:20.818522 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wp6f6" podUID="1b969ec1-1858-44ff-92da-a071b9ff15ee" Jan 21 16:02:21 crc kubenswrapper[4760]: E0121 16:02:21.388165 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:393d7567eef4fd05af625389f5a7384c6bb75108b21b06183f1f5e33aac5417e" Jan 21 16:02:21 crc kubenswrapper[4760]: E0121 16:02:21.388382 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:393d7567eef4fd05af625389f5a7384c6bb75108b21b06183f1f5e33aac5417e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8nhns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-767fdc4f47-pp2ln_openstack-operators(f3256ca9-a35f-4ae0-a56c-ac2eaf3bbdc3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 16:02:21 crc kubenswrapper[4760]: E0121 16:02:21.390130 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-pp2ln" podUID="f3256ca9-a35f-4ae0-a56c-ac2eaf3bbdc3" Jan 21 16:02:21 crc kubenswrapper[4760]: E0121 16:02:21.798869 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wp6f6" podUID="1b969ec1-1858-44ff-92da-a071b9ff15ee" Jan 21 16:02:21 crc kubenswrapper[4760]: E0121 16:02:21.799057 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:393d7567eef4fd05af625389f5a7384c6bb75108b21b06183f1f5e33aac5417e\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-pp2ln" podUID="f3256ca9-a35f-4ae0-a56c-ac2eaf3bbdc3" Jan 21 16:02:22 crc kubenswrapper[4760]: E0121 16:02:22.305961 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 21 16:02:22 crc kubenswrapper[4760]: E0121 16:02:22.306195 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kfwwg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-vxwmq_openstack-operators(a2806ede-c1d4-4571-8829-1b94cf7d1606): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 16:02:22 crc kubenswrapper[4760]: E0121 16:02:22.307437 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vxwmq" podUID="a2806ede-c1d4-4571-8829-1b94cf7d1606" Jan 21 16:02:22 crc kubenswrapper[4760]: E0121 16:02:22.808605 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vxwmq" podUID="a2806ede-c1d4-4571-8829-1b94cf7d1606" Jan 21 16:02:24 crc kubenswrapper[4760]: I0121 16:02:24.625389 4760 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 16:02:26 crc kubenswrapper[4760]: I0121 16:02:26.592303 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-77c48c7859-7trxk"] Jan 21 16:02:26 crc kubenswrapper[4760]: I0121 16:02:26.632952 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-867799c6f-wh9wg"] Jan 21 16:02:26 crc kubenswrapper[4760]: W0121 16:02:26.690396 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4023c758_3567_4e32_97de_9501e117e965.slice/crio-fa9477fd35b6381aa8fe121f7bed169ce2bb4209952adbf8eb0f122f0c15fd60 WatchSource:0}: Error finding container fa9477fd35b6381aa8fe121f7bed169ce2bb4209952adbf8eb0f122f0c15fd60: Status 404 returned error can't find the container 
with id fa9477fd35b6381aa8fe121f7bed169ce2bb4209952adbf8eb0f122f0c15fd60 Jan 21 16:02:26 crc kubenswrapper[4760]: I0121 16:02:26.726415 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt"] Jan 21 16:02:26 crc kubenswrapper[4760]: I0121 16:02:26.857024 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-k92xb" event={"ID":"97d1cdc7-8fc8-4e7b-b231-0cceadc61597","Type":"ContainerStarted","Data":"ba333cbba82ad485b0fa570791de74bbf6a53dc5a53759394763b0b999c35aca"} Jan 21 16:02:26 crc kubenswrapper[4760]: I0121 16:02:26.857531 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-k92xb" Jan 21 16:02:26 crc kubenswrapper[4760]: I0121 16:02:26.866417 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-867799c6f-wh9wg" event={"ID":"4023c758-3567-4e32-97de-9501e117e965","Type":"ContainerStarted","Data":"fa9477fd35b6381aa8fe121f7bed169ce2bb4209952adbf8eb0f122f0c15fd60"} Jan 21 16:02:26 crc kubenswrapper[4760]: I0121 16:02:26.877420 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-z7mkd" event={"ID":"a28cddfd-04c6-4860-a5eb-c341f2b25009","Type":"ContainerStarted","Data":"218f00a23a13a8f3ed8ed2c6b61eba5e349b2d283036142981633b7cb20aa819"} Jan 21 16:02:26 crc kubenswrapper[4760]: I0121 16:02:26.891844 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-z7mkd" Jan 21 16:02:26 crc kubenswrapper[4760]: I0121 16:02:26.904201 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-c6994669c-z2bkt" 
event={"ID":"bac59717-45dd-495a-8874-b4f29a8adc3f","Type":"ContainerStarted","Data":"b119b673f2cc071e8d729c96a4bd111c6ec921621e069d89c9323ffa9588e460"} Jan 21 16:02:26 crc kubenswrapper[4760]: I0121 16:02:26.904576 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-c6994669c-z2bkt" Jan 21 16:02:26 crc kubenswrapper[4760]: I0121 16:02:26.906007 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-65849867d6-xckkd" event={"ID":"7e819adc-151b-456f-b41f-5101b03ab7b2","Type":"ContainerStarted","Data":"f9d03e109ef2ca94a5cda524912d8654c83143c274d5fbb3fd73cde556f4aefc"} Jan 21 16:02:26 crc kubenswrapper[4760]: I0121 16:02:26.907504 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-65849867d6-xckkd" Jan 21 16:02:26 crc kubenswrapper[4760]: I0121 16:02:26.909470 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-chvdr" event={"ID":"80ad016c-9145-4e38-90f1-515a1fcd0fc7","Type":"ContainerStarted","Data":"12b19d25e6585bd5136d9bd2b90011afd22e8fdc0b0b86618e318e49de1dbee6"} Jan 21 16:02:26 crc kubenswrapper[4760]: I0121 16:02:26.910599 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-chvdr" Jan 21 16:02:26 crc kubenswrapper[4760]: I0121 16:02:26.923565 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-49prq" event={"ID":"8d3c8a68-0896-4875-b6ff-d6f6fd2794b6","Type":"ContainerStarted","Data":"6f37b667bb8bdaf82ac87ea32ba7bd7ed2a86996d7df19f2a7025df6dde4986b"} Jan 21 16:02:26 crc kubenswrapper[4760]: I0121 16:02:26.924514 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-49prq" Jan 21 16:02:26 crc kubenswrapper[4760]: I0121 16:02:26.929443 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-7trxk" event={"ID":"a441beba-fca9-47d4-bf5b-1533929ea421","Type":"ContainerStarted","Data":"29c2b08e1ae346685dc5339554a8d3952700121767a71271d3a780175abbb0e6"} Jan 21 16:02:26 crc kubenswrapper[4760]: I0121 16:02:26.931413 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-c6994669c-z2bkt" podStartSLOduration=7.99089934 podStartE2EDuration="30.93139221s" podCreationTimestamp="2026-01-21 16:01:56 +0000 UTC" firstStartedPulling="2026-01-21 16:01:58.958298119 +0000 UTC m=+889.626067697" lastFinishedPulling="2026-01-21 16:02:21.898790989 +0000 UTC m=+912.566560567" observedRunningTime="2026-01-21 16:02:26.930784014 +0000 UTC m=+917.598553592" watchObservedRunningTime="2026-01-21 16:02:26.93139221 +0000 UTC m=+917.599161788" Jan 21 16:02:26 crc kubenswrapper[4760]: I0121 16:02:26.932771 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-k92xb" podStartSLOduration=8.179709592 podStartE2EDuration="30.932763786s" podCreationTimestamp="2026-01-21 16:01:56 +0000 UTC" firstStartedPulling="2026-01-21 16:01:59.145680143 +0000 UTC m=+889.813449721" lastFinishedPulling="2026-01-21 16:02:21.898734347 +0000 UTC m=+912.566503915" observedRunningTime="2026-01-21 16:02:26.907109373 +0000 UTC m=+917.574878951" watchObservedRunningTime="2026-01-21 16:02:26.932763786 +0000 UTC m=+917.600533364" Jan 21 16:02:26 crc kubenswrapper[4760]: I0121 16:02:26.938952 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt" 
event={"ID":"28e62955-b747-4ca8-aa6b-d0678242596f","Type":"ContainerStarted","Data":"01f78b6e1c09de6462b40090313fefb514253fb619aaa0333525011e3f0eff17"}
Jan 21 16:02:26 crc kubenswrapper[4760]: I0121 16:02:26.944216 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-rjrtw" event={"ID":"1530b88f-1192-4aa8-b9ba-82f23e37ea6a","Type":"ContainerStarted","Data":"8b6464d2d1b0510cd044407c2445c827de722ee2ebd3a822884a5ffb4312cde9"}
Jan 21 16:02:26 crc kubenswrapper[4760]: I0121 16:02:26.945118 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-rjrtw"
Jan 21 16:02:26 crc kubenswrapper[4760]: I0121 16:02:26.975759 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-z7mkd" podStartSLOduration=8.544784117 podStartE2EDuration="30.974942139s" podCreationTimestamp="2026-01-21 16:01:56 +0000 UTC" firstStartedPulling="2026-01-21 16:01:58.951538929 +0000 UTC m=+889.619308507" lastFinishedPulling="2026-01-21 16:02:21.381696951 +0000 UTC m=+912.049466529" observedRunningTime="2026-01-21 16:02:26.964973264 +0000 UTC m=+917.632742842" watchObservedRunningTime="2026-01-21 16:02:26.974942139 +0000 UTC m=+917.642711727"
Jan 21 16:02:27 crc kubenswrapper[4760]: I0121 16:02:27.027269 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-49prq" podStartSLOduration=3.321766359 podStartE2EDuration="30.027244612s" podCreationTimestamp="2026-01-21 16:01:57 +0000 UTC" firstStartedPulling="2026-01-21 16:01:59.589365188 +0000 UTC m=+890.257134766" lastFinishedPulling="2026-01-21 16:02:26.294843441 +0000 UTC m=+916.962613019" observedRunningTime="2026-01-21 16:02:27.022794213 +0000 UTC m=+917.690563801" watchObservedRunningTime="2026-01-21 16:02:27.027244612 +0000 UTC m=+917.695014190"
Jan 21 16:02:27 crc kubenswrapper[4760]: I0121 16:02:27.028812 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-65849867d6-xckkd" podStartSLOduration=3.296727222 podStartE2EDuration="30.028801353s" podCreationTimestamp="2026-01-21 16:01:57 +0000 UTC" firstStartedPulling="2026-01-21 16:01:59.583474381 +0000 UTC m=+890.251243959" lastFinishedPulling="2026-01-21 16:02:26.315548512 +0000 UTC m=+916.983318090" observedRunningTime="2026-01-21 16:02:26.994805638 +0000 UTC m=+917.662575216" watchObservedRunningTime="2026-01-21 16:02:27.028801353 +0000 UTC m=+917.696570931"
Jan 21 16:02:27 crc kubenswrapper[4760]: I0121 16:02:27.065474 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-rjrtw" podStartSLOduration=8.827863559 podStartE2EDuration="31.065445119s" podCreationTimestamp="2026-01-21 16:01:56 +0000 UTC" firstStartedPulling="2026-01-21 16:01:59.142665142 +0000 UTC m=+889.810434720" lastFinishedPulling="2026-01-21 16:02:21.380246702 +0000 UTC m=+912.048016280" observedRunningTime="2026-01-21 16:02:27.063391674 +0000 UTC m=+917.731161252" watchObservedRunningTime="2026-01-21 16:02:27.065445119 +0000 UTC m=+917.733214697"
Jan 21 16:02:27 crc kubenswrapper[4760]: I0121 16:02:27.095339 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-chvdr" podStartSLOduration=3.336663245 podStartE2EDuration="30.095309194s" podCreationTimestamp="2026-01-21 16:01:57 +0000 UTC" firstStartedPulling="2026-01-21 16:01:59.585065513 +0000 UTC m=+890.252835091" lastFinishedPulling="2026-01-21 16:02:26.343711462 +0000 UTC m=+917.011481040" observedRunningTime="2026-01-21 16:02:27.090246229 +0000 UTC m=+917.758015807" watchObservedRunningTime="2026-01-21 16:02:27.095309194 +0000 UTC m=+917.763078772"
Jan 21 16:02:27 crc kubenswrapper[4760]: I0121 16:02:27.973270 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-867799c6f-wh9wg" event={"ID":"4023c758-3567-4e32-97de-9501e117e965","Type":"ContainerStarted","Data":"adcca865dcf1bdb467331c7af65a47d6864d41b4b990174ff58e19deb759c898"}
Jan 21 16:02:27 crc kubenswrapper[4760]: I0121 16:02:27.974307 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-867799c6f-wh9wg"
Jan 21 16:02:28 crc kubenswrapper[4760]: I0121 16:02:27.994651 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-566bc" event={"ID":"0252011a-4dac-4cad-94b3-39a6cf9bcd42","Type":"ContainerStarted","Data":"2394b324991afbd2b6132048135fd7cabece217b66652aa40337a3484fed1633"}
Jan 21 16:02:28 crc kubenswrapper[4760]: I0121 16:02:27.995490 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-566bc"
Jan 21 16:02:28 crc kubenswrapper[4760]: I0121 16:02:28.004251 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-lqgfs" event={"ID":"75bcd345-56d6-4c12-9392-eea68c43dc30","Type":"ContainerStarted","Data":"3335f6cb420d10ab458592cddc36930dd9a5efd08dd61f7d509ff5e2854e2d58"}
Jan 21 16:02:28 crc kubenswrapper[4760]: I0121 16:02:28.005100 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-lqgfs"
Jan 21 16:02:28 crc kubenswrapper[4760]: I0121 16:02:28.019184 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-m7zb2" event={"ID":"b511b419-e589-4783-a6a8-6d6fee8decde","Type":"ContainerStarted","Data":"7d28b9b974a547f573d013688adc3471ba184929d26a72f20007b089a0825f4a"}
Jan 21 16:02:28 crc kubenswrapper[4760]: I0121 16:02:28.020022 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-m7zb2"
Jan 21 16:02:28 crc kubenswrapper[4760]: I0121 16:02:28.021461 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-fkd2l" event={"ID":"d8bbdcea-a920-4fb4-b434-2323a28d0ea7","Type":"ContainerStarted","Data":"519c274f37d329ce0315bde79db77c54703b0ce0e4fdc8d8b855c30719840332"}
Jan 21 16:02:28 crc kubenswrapper[4760]: I0121 16:02:28.021895 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-fkd2l"
Jan 21 16:02:28 crc kubenswrapper[4760]: I0121 16:02:28.023113 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-cfsr6" event={"ID":"813b8c35-22e2-41a4-9523-a6cf3cd99ab2","Type":"ContainerStarted","Data":"8cf64a557f901f05cef7506c7f3aba75ffbda2888b9c4e6b9be95547d88e9195"}
Jan 21 16:02:28 crc kubenswrapper[4760]: I0121 16:02:28.023538 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-cfsr6"
Jan 21 16:02:28 crc kubenswrapper[4760]: I0121 16:02:28.025243 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-9f958b845-kc2f5" event={"ID":"8bcbe073-fa37-480d-a74a-af4c8d6a449b","Type":"ContainerStarted","Data":"21fddf5bae7ea87ca2dbd66e20e3976488bfa349820d11c231ffebfd961d8991"}
Jan 21 16:02:28 crc kubenswrapper[4760]: I0121 16:02:28.025669 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-9f958b845-kc2f5"
Jan 21 16:02:28 crc kubenswrapper[4760]: I0121 16:02:28.109890 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-867799c6f-wh9wg" podStartSLOduration=31.109870197 podStartE2EDuration="31.109870197s" podCreationTimestamp="2026-01-21 16:01:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:02:28.042361879 +0000 UTC m=+918.710131487" watchObservedRunningTime="2026-01-21 16:02:28.109870197 +0000 UTC m=+918.777639775"
Jan 21 16:02:28 crc kubenswrapper[4760]: I0121 16:02:28.111936 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-lqgfs" podStartSLOduration=4.366446174 podStartE2EDuration="31.111929322s" podCreationTimestamp="2026-01-21 16:01:57 +0000 UTC" firstStartedPulling="2026-01-21 16:01:59.609583807 +0000 UTC m=+890.277353385" lastFinishedPulling="2026-01-21 16:02:26.355066955 +0000 UTC m=+917.022836533" observedRunningTime="2026-01-21 16:02:28.109019924 +0000 UTC m=+918.776789502" watchObservedRunningTime="2026-01-21 16:02:28.111929322 +0000 UTC m=+918.779698900"
Jan 21 16:02:28 crc kubenswrapper[4760]: I0121 16:02:28.137905 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-9f958b845-kc2f5" podStartSLOduration=3.707150032 podStartE2EDuration="32.137877972s" podCreationTimestamp="2026-01-21 16:01:56 +0000 UTC" firstStartedPulling="2026-01-21 16:01:58.805935918 +0000 UTC m=+889.473705496" lastFinishedPulling="2026-01-21 16:02:27.236663858 +0000 UTC m=+917.904433436" observedRunningTime="2026-01-21 16:02:28.133778453 +0000 UTC m=+918.801548031" watchObservedRunningTime="2026-01-21 16:02:28.137877972 +0000 UTC m=+918.805647550"
Jan 21 16:02:28 crc kubenswrapper[4760]: I0121 16:02:28.161663 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-566bc" podStartSLOduration=4.430222112 podStartE2EDuration="31.161629645s" podCreationTimestamp="2026-01-21 16:01:57 +0000 UTC" firstStartedPulling="2026-01-21 16:01:59.584777346 +0000 UTC m=+890.252546924" lastFinishedPulling="2026-01-21 16:02:26.316184879 +0000 UTC m=+916.983954457" observedRunningTime="2026-01-21 16:02:28.158228754 +0000 UTC m=+918.825998332" watchObservedRunningTime="2026-01-21 16:02:28.161629645 +0000 UTC m=+918.829399223"
Jan 21 16:02:28 crc kubenswrapper[4760]: I0121 16:02:28.188371 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-fkd2l" podStartSLOduration=4.449861234 podStartE2EDuration="31.188342516s" podCreationTimestamp="2026-01-21 16:01:57 +0000 UTC" firstStartedPulling="2026-01-21 16:01:59.577230385 +0000 UTC m=+890.244999963" lastFinishedPulling="2026-01-21 16:02:26.315711667 +0000 UTC m=+916.983481245" observedRunningTime="2026-01-21 16:02:28.183537118 +0000 UTC m=+918.851306716" watchObservedRunningTime="2026-01-21 16:02:28.188342516 +0000 UTC m=+918.856112104"
Jan 21 16:02:28 crc kubenswrapper[4760]: I0121 16:02:28.215177 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-cfsr6" podStartSLOduration=3.660978417 podStartE2EDuration="31.21514928s" podCreationTimestamp="2026-01-21 16:01:57 +0000 UTC" firstStartedPulling="2026-01-21 16:01:59.505284387 +0000 UTC m=+890.173053965" lastFinishedPulling="2026-01-21 16:02:27.05945525 +0000 UTC m=+917.727224828" observedRunningTime="2026-01-21 16:02:28.21251865 +0000 UTC m=+918.880288218" watchObservedRunningTime="2026-01-21 16:02:28.21514928 +0000 UTC m=+918.882918858"
Jan 21 16:02:28 crc kubenswrapper[4760]: I0121 16:02:28.241076 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-m7zb2" podStartSLOduration=4.535678719 podStartE2EDuration="31.241051259s" podCreationTimestamp="2026-01-21 16:01:57 +0000 UTC" firstStartedPulling="2026-01-21 16:01:59.585794383 +0000 UTC m=+890.253563961" lastFinishedPulling="2026-01-21 16:02:26.291166923 +0000 UTC m=+916.958936501" observedRunningTime="2026-01-21 16:02:28.238875441 +0000 UTC m=+918.906645029" watchObservedRunningTime="2026-01-21 16:02:28.241051259 +0000 UTC m=+918.908820837"
Jan 21 16:02:30 crc kubenswrapper[4760]: I0121 16:02:30.040570 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-7vqlg" event={"ID":"2ef1c912-1599-4799-8f4c-1c9cb20045ba","Type":"ContainerStarted","Data":"ada39b6cd7b18701060fa1509d9ef046cecd9700c9761135d0a8d699eecc93bf"}
Jan 21 16:02:30 crc kubenswrapper[4760]: I0121 16:02:30.041997 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-7vqlg"
Jan 21 16:02:30 crc kubenswrapper[4760]: I0121 16:02:30.107305 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-nszmq" event={"ID":"ebbdf3cf-f86a-471e-89d0-d2a43f8245f6","Type":"ContainerStarted","Data":"30b1b5404bf9a82bd5fd99fe6178272423f145dcbe347b5b76115c6605b1566f"}
Jan 21 16:02:30 crc kubenswrapper[4760]: I0121 16:02:30.108256 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-nszmq"
Jan 21 16:02:30 crc kubenswrapper[4760]: I0121 16:02:30.136862 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-7vqlg" podStartSLOduration=3.071430141 podStartE2EDuration="33.136837354s" podCreationTimestamp="2026-01-21 16:01:57 +0000 UTC" firstStartedPulling="2026-01-21 16:01:59.154031435 +0000 UTC m=+889.821801023" lastFinishedPulling="2026-01-21 16:02:29.219438658 +0000 UTC m=+919.887208236" observedRunningTime="2026-01-21 16:02:30.133391412 +0000 UTC m=+920.801160990" watchObservedRunningTime="2026-01-21 16:02:30.136837354 +0000 UTC m=+920.804606932"
Jan 21 16:02:30 crc kubenswrapper[4760]: I0121 16:02:30.159175 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-nszmq" podStartSLOduration=3.731191693 podStartE2EDuration="34.159147218s" podCreationTimestamp="2026-01-21 16:01:56 +0000 UTC" firstStartedPulling="2026-01-21 16:01:58.792563772 +0000 UTC m=+889.460333350" lastFinishedPulling="2026-01-21 16:02:29.220519297 +0000 UTC m=+919.888288875" observedRunningTime="2026-01-21 16:02:30.1577064 +0000 UTC m=+920.825475978" watchObservedRunningTime="2026-01-21 16:02:30.159147218 +0000 UTC m=+920.826916806"
Jan 21 16:02:33 crc kubenswrapper[4760]: I0121 16:02:33.800120 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-867799c6f-wh9wg"
Jan 21 16:02:34 crc kubenswrapper[4760]: I0121 16:02:34.106745 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-7trxk" event={"ID":"a441beba-fca9-47d4-bf5b-1533929ea421","Type":"ContainerStarted","Data":"b6a0caf91679b3b7225efca69d767ea8f1b189a1a92253092d2e2f98fb5c56bc"}
Jan 21 16:02:34 crc kubenswrapper[4760]: I0121 16:02:34.106846 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-7trxk"
Jan 21 16:02:34 crc kubenswrapper[4760]: I0121 16:02:34.108423 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt" event={"ID":"28e62955-b747-4ca8-aa6b-d0678242596f","Type":"ContainerStarted","Data":"932aa1d189e1603e2cef8113d56e6fc4d023b0b31d9f3b39d1a5d15ac0688bf7"}
Jan 21 16:02:34 crc kubenswrapper[4760]: I0121 16:02:34.108588 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt"
Jan 21 16:02:34 crc kubenswrapper[4760]: I0121 16:02:34.109848 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-zlfp7" event={"ID":"6026e9ac-64d0-4386-bbd8-f0ac19960a22","Type":"ContainerStarted","Data":"8ab9b75012f0948af04db86f6ab5970acf221f225fa9d30214c95463aaba6896"}
Jan 21 16:02:34 crc kubenswrapper[4760]: I0121 16:02:34.110048 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-zlfp7"
Jan 21 16:02:34 crc kubenswrapper[4760]: I0121 16:02:34.129299 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-7trxk" podStartSLOduration=31.892620332 podStartE2EDuration="38.129275191s" podCreationTimestamp="2026-01-21 16:01:56 +0000 UTC" firstStartedPulling="2026-01-21 16:02:26.680400217 +0000 UTC m=+917.348169795" lastFinishedPulling="2026-01-21 16:02:32.917055076 +0000 UTC m=+923.584824654" observedRunningTime="2026-01-21 16:02:34.120866197 +0000 UTC m=+924.788635785" watchObservedRunningTime="2026-01-21 16:02:34.129275191 +0000 UTC m=+924.797044769"
Jan 21 16:02:34 crc kubenswrapper[4760]: I0121 16:02:34.154062 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-zlfp7" podStartSLOduration=4.027797519 podStartE2EDuration="38.15403702s" podCreationTimestamp="2026-01-21 16:01:56 +0000 UTC" firstStartedPulling="2026-01-21 16:01:58.792541181 +0000 UTC m=+889.460310759" lastFinishedPulling="2026-01-21 16:02:32.918780692 +0000 UTC m=+923.586550260" observedRunningTime="2026-01-21 16:02:34.153101105 +0000 UTC m=+924.820870683" watchObservedRunningTime="2026-01-21 16:02:34.15403702 +0000 UTC m=+924.821806598"
Jan 21 16:02:34 crc kubenswrapper[4760]: I0121 16:02:34.189443 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt" podStartSLOduration=31.043991202 podStartE2EDuration="37.189418182s" podCreationTimestamp="2026-01-21 16:01:57 +0000 UTC" firstStartedPulling="2026-01-21 16:02:26.772977362 +0000 UTC m=+917.440746940" lastFinishedPulling="2026-01-21 16:02:32.918404342 +0000 UTC m=+923.586173920" observedRunningTime="2026-01-21 16:02:34.184237465 +0000 UTC m=+924.852007043" watchObservedRunningTime="2026-01-21 16:02:34.189418182 +0000 UTC m=+924.857187750"
Jan 21 16:02:36 crc kubenswrapper[4760]: I0121 16:02:36.124212 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ffq4x" event={"ID":"daef61f2-122d-4414-b7df-24982387fa95","Type":"ContainerStarted","Data":"629ced44933ffcab49a912ee947acf6194efcc7a937779bc9c042d94dcbecc80"}
Jan 21 16:02:36 crc kubenswrapper[4760]: I0121 16:02:36.124805 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ffq4x"
Jan 21 16:02:36 crc kubenswrapper[4760]: I0121 16:02:36.145124 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ffq4x" podStartSLOduration=2.531756712 podStartE2EDuration="39.145099192s" podCreationTimestamp="2026-01-21 16:01:57 +0000 UTC" firstStartedPulling="2026-01-21 16:01:59.14257321 +0000 UTC m=+889.810342788" lastFinishedPulling="2026-01-21 16:02:35.75591569 +0000 UTC m=+926.423685268" observedRunningTime="2026-01-21 16:02:36.140656434 +0000 UTC m=+926.808426012" watchObservedRunningTime="2026-01-21 16:02:36.145099192 +0000 UTC m=+926.812868770"
Jan 21 16:02:37 crc kubenswrapper[4760]: I0121 16:02:37.138585 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wp6f6" event={"ID":"1b969ec1-1858-44ff-92da-a071b9ff15ee","Type":"ContainerStarted","Data":"34a491da07b5f970f98fcdff3e00fc5508959b42728fd5a8d524f1b4915fa570"}
Jan 21 16:02:37 crc kubenswrapper[4760]: I0121 16:02:37.139115 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wp6f6"
Jan 21 16:02:37 crc kubenswrapper[4760]: I0121 16:02:37.163208 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wp6f6" podStartSLOduration=4.113114696 podStartE2EDuration="41.163184688s" podCreationTimestamp="2026-01-21 16:01:56 +0000 UTC" firstStartedPulling="2026-01-21 16:01:58.971237554 +0000 UTC m=+889.639007132" lastFinishedPulling="2026-01-21 16:02:36.021307546 +0000 UTC m=+926.689077124" observedRunningTime="2026-01-21 16:02:37.158853412 +0000 UTC m=+927.826622990" watchObservedRunningTime="2026-01-21 16:02:37.163184688 +0000 UTC m=+927.830954266"
Jan 21 16:02:37 crc kubenswrapper[4760]: I0121 16:02:37.172759 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-nszmq"
Jan 21 16:02:37 crc kubenswrapper[4760]: I0121 16:02:37.222860 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-9f958b845-kc2f5"
Jan 21 16:02:37 crc kubenswrapper[4760]: I0121 16:02:37.238518 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-c6994669c-z2bkt"
Jan 21 16:02:37 crc kubenswrapper[4760]: I0121 16:02:37.288168 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-k92xb"
Jan 21 16:02:37 crc kubenswrapper[4760]: I0121 16:02:37.460703 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-z7mkd"
Jan 21 16:02:37 crc kubenswrapper[4760]: I0121 16:02:37.546829 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-rjrtw"
Jan 21 16:02:37 crc kubenswrapper[4760]: I0121 16:02:37.697953 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-chvdr"
Jan 21 16:02:37 crc kubenswrapper[4760]: I0121 16:02:37.707639 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-7vqlg"
Jan 21 16:02:37 crc kubenswrapper[4760]: I0121 16:02:37.736955 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-65849867d6-xckkd"
Jan 21 16:02:37 crc kubenswrapper[4760]: I0121 16:02:37.754005 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-49prq"
Jan 21 16:02:37 crc kubenswrapper[4760]: I0121 16:02:37.769316 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-566bc"
Jan 21 16:02:37 crc kubenswrapper[4760]: I0121 16:02:37.964825 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-lqgfs"
Jan 21 16:02:38 crc kubenswrapper[4760]: I0121 16:02:38.166282 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-m7zb2"
Jan 21 16:02:38 crc kubenswrapper[4760]: I0121 16:02:38.235023 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-cfsr6"
Jan 21 16:02:38 crc kubenswrapper[4760]: I0121 16:02:38.260754 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-fkd2l"
Jan 21 16:02:43 crc kubenswrapper[4760]: I0121 16:02:43.272951 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-7trxk"
Jan 21 16:02:43 crc kubenswrapper[4760]: I0121 16:02:43.529407 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt"
Jan 21 16:02:47 crc kubenswrapper[4760]: I0121 16:02:47.184021 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-zlfp7"
Jan 21 16:02:47 crc kubenswrapper[4760]: I0121 16:02:47.335347 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-wp6f6"
Jan 21 16:02:47 crc kubenswrapper[4760]: I0121 16:02:47.877749 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ffq4x"
Jan 21 16:02:59 crc kubenswrapper[4760]: I0121 16:02:59.296536 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-s77mf"]
Jan 21 16:02:59 crc kubenswrapper[4760]: I0121 16:02:59.299564 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s77mf"
Jan 21 16:02:59 crc kubenswrapper[4760]: I0121 16:02:59.314940 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s77mf"]
Jan 21 16:02:59 crc kubenswrapper[4760]: I0121 16:02:59.481165 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af556754-b770-4425-b159-c2061788e5c0-utilities\") pod \"community-operators-s77mf\" (UID: \"af556754-b770-4425-b159-c2061788e5c0\") " pod="openshift-marketplace/community-operators-s77mf"
Jan 21 16:02:59 crc kubenswrapper[4760]: I0121 16:02:59.481573 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af556754-b770-4425-b159-c2061788e5c0-catalog-content\") pod \"community-operators-s77mf\" (UID: \"af556754-b770-4425-b159-c2061788e5c0\") " pod="openshift-marketplace/community-operators-s77mf"
Jan 21 16:02:59 crc kubenswrapper[4760]: I0121 16:02:59.481678 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85rd2\" (UniqueName: \"kubernetes.io/projected/af556754-b770-4425-b159-c2061788e5c0-kube-api-access-85rd2\") pod \"community-operators-s77mf\" (UID: \"af556754-b770-4425-b159-c2061788e5c0\") " pod="openshift-marketplace/community-operators-s77mf"
Jan 21 16:02:59 crc kubenswrapper[4760]: I0121 16:02:59.582612 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af556754-b770-4425-b159-c2061788e5c0-utilities\") pod \"community-operators-s77mf\" (UID: \"af556754-b770-4425-b159-c2061788e5c0\") " pod="openshift-marketplace/community-operators-s77mf"
Jan 21 16:02:59 crc kubenswrapper[4760]: I0121 16:02:59.582682 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af556754-b770-4425-b159-c2061788e5c0-catalog-content\") pod \"community-operators-s77mf\" (UID: \"af556754-b770-4425-b159-c2061788e5c0\") " pod="openshift-marketplace/community-operators-s77mf"
Jan 21 16:02:59 crc kubenswrapper[4760]: I0121 16:02:59.582740 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85rd2\" (UniqueName: \"kubernetes.io/projected/af556754-b770-4425-b159-c2061788e5c0-kube-api-access-85rd2\") pod \"community-operators-s77mf\" (UID: \"af556754-b770-4425-b159-c2061788e5c0\") " pod="openshift-marketplace/community-operators-s77mf"
Jan 21 16:02:59 crc kubenswrapper[4760]: I0121 16:02:59.583157 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af556754-b770-4425-b159-c2061788e5c0-catalog-content\") pod \"community-operators-s77mf\" (UID: \"af556754-b770-4425-b159-c2061788e5c0\") " pod="openshift-marketplace/community-operators-s77mf"
Jan 21 16:02:59 crc kubenswrapper[4760]: I0121 16:02:59.583157 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af556754-b770-4425-b159-c2061788e5c0-utilities\") pod \"community-operators-s77mf\" (UID: \"af556754-b770-4425-b159-c2061788e5c0\") " pod="openshift-marketplace/community-operators-s77mf"
Jan 21 16:02:59 crc kubenswrapper[4760]: I0121 16:02:59.606057 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85rd2\" (UniqueName: \"kubernetes.io/projected/af556754-b770-4425-b159-c2061788e5c0-kube-api-access-85rd2\") pod \"community-operators-s77mf\" (UID: \"af556754-b770-4425-b159-c2061788e5c0\") " pod="openshift-marketplace/community-operators-s77mf"
Jan 21 16:02:59 crc kubenswrapper[4760]: I0121 16:02:59.677636 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s77mf"
Jan 21 16:03:05 crc kubenswrapper[4760]: I0121 16:03:05.315868 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s77mf"]
Jan 21 16:03:05 crc kubenswrapper[4760]: W0121 16:03:05.320036 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf556754_b770_4425_b159_c2061788e5c0.slice/crio-7810d9e06c85f2b95f0525de568d77a28b55352d81dbb86ec7f8398e66635f2c WatchSource:0}: Error finding container 7810d9e06c85f2b95f0525de568d77a28b55352d81dbb86ec7f8398e66635f2c: Status 404 returned error can't find the container with id 7810d9e06c85f2b95f0525de568d77a28b55352d81dbb86ec7f8398e66635f2c
Jan 21 16:03:05 crc kubenswrapper[4760]: I0121 16:03:05.327931 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s77mf" event={"ID":"af556754-b770-4425-b159-c2061788e5c0","Type":"ContainerStarted","Data":"7810d9e06c85f2b95f0525de568d77a28b55352d81dbb86ec7f8398e66635f2c"}
Jan 21 16:03:06 crc kubenswrapper[4760]: I0121 16:03:06.335615 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-pp2ln" event={"ID":"f3256ca9-a35f-4ae0-a56c-ac2eaf3bbdc3","Type":"ContainerStarted","Data":"da4decfddc923c901da273ad4e73ef439fa82a774c378fe66f6f7fbc4c3529c9"}
Jan 21 16:03:06 crc kubenswrapper[4760]: I0121 16:03:06.336917 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vxwmq" event={"ID":"a2806ede-c1d4-4571-8829-1b94cf7d1606","Type":"ContainerStarted","Data":"5bc09bdbc1322161d733f645869e0fbff69713cadd1080b4ebbe046c0753e5c6"}
Jan 21 16:03:07 crc kubenswrapper[4760]: I0121 16:03:07.423594 4760 generic.go:334] "Generic (PLEG): container finished" podID="af556754-b770-4425-b159-c2061788e5c0" containerID="c7a348c930254f47a90661b086cf21eb829628b32ad9b700d97614d49c233075" exitCode=0
Jan 21 16:03:07 crc kubenswrapper[4760]: I0121 16:03:07.423720 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s77mf" event={"ID":"af556754-b770-4425-b159-c2061788e5c0","Type":"ContainerDied","Data":"c7a348c930254f47a90661b086cf21eb829628b32ad9b700d97614d49c233075"}
Jan 21 16:03:07 crc kubenswrapper[4760]: I0121 16:03:07.425542 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-pp2ln"
Jan 21 16:03:07 crc kubenswrapper[4760]: I0121 16:03:07.464828 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vxwmq" podStartSLOduration=5.049653703 podStartE2EDuration="1m10.464803768s" podCreationTimestamp="2026-01-21 16:01:57 +0000 UTC" firstStartedPulling="2026-01-21 16:01:59.57143315 +0000 UTC m=+890.239202728" lastFinishedPulling="2026-01-21 16:03:04.986583215 +0000 UTC m=+955.654352793" observedRunningTime="2026-01-21 16:03:07.460098983 +0000 UTC m=+958.127868561" watchObservedRunningTime="2026-01-21 16:03:07.464803768 +0000 UTC m=+958.132573346"
Jan 21 16:03:07 crc kubenswrapper[4760]: I0121 16:03:07.486626 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-pp2ln" podStartSLOduration=5.689392009 podStartE2EDuration="1m11.486590768s" podCreationTimestamp="2026-01-21 16:01:56 +0000 UTC" firstStartedPulling="2026-01-21 16:01:59.131501755 +0000 UTC m=+889.799271333" lastFinishedPulling="2026-01-21 16:03:04.928700514 +0000 UTC m=+955.596470092" observedRunningTime="2026-01-21 16:03:07.482764096 +0000 UTC m=+958.150533694" watchObservedRunningTime="2026-01-21 16:03:07.486590768 +0000 UTC m=+958.154360356"
Jan 21 16:03:09 crc kubenswrapper[4760]: I0121 16:03:09.288213 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dnjxt"]
Jan 21 16:03:09 crc kubenswrapper[4760]: I0121 16:03:09.289923 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dnjxt"
Jan 21 16:03:09 crc kubenswrapper[4760]: I0121 16:03:09.306552 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dnjxt"]
Jan 21 16:03:09 crc kubenswrapper[4760]: I0121 16:03:09.439695 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cf900ad-923c-4c9d-8999-aede0ef54f5a-catalog-content\") pod \"redhat-operators-dnjxt\" (UID: \"9cf900ad-923c-4c9d-8999-aede0ef54f5a\") " pod="openshift-marketplace/redhat-operators-dnjxt"
Jan 21 16:03:09 crc kubenswrapper[4760]: I0121 16:03:09.439747 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-254dz\" (UniqueName: \"kubernetes.io/projected/9cf900ad-923c-4c9d-8999-aede0ef54f5a-kube-api-access-254dz\") pod \"redhat-operators-dnjxt\" (UID: \"9cf900ad-923c-4c9d-8999-aede0ef54f5a\") " pod="openshift-marketplace/redhat-operators-dnjxt"
Jan 21 16:03:09 crc kubenswrapper[4760]: I0121 16:03:09.439782 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cf900ad-923c-4c9d-8999-aede0ef54f5a-utilities\") pod \"redhat-operators-dnjxt\" (UID: \"9cf900ad-923c-4c9d-8999-aede0ef54f5a\") " pod="openshift-marketplace/redhat-operators-dnjxt"
Jan 21 16:03:09 crc kubenswrapper[4760]: I0121 16:03:09.541147 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cf900ad-923c-4c9d-8999-aede0ef54f5a-catalog-content\") pod \"redhat-operators-dnjxt\" (UID: \"9cf900ad-923c-4c9d-8999-aede0ef54f5a\") " pod="openshift-marketplace/redhat-operators-dnjxt"
Jan 21 16:03:09 crc kubenswrapper[4760]: I0121 16:03:09.541214 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-254dz\" (UniqueName: \"kubernetes.io/projected/9cf900ad-923c-4c9d-8999-aede0ef54f5a-kube-api-access-254dz\") pod \"redhat-operators-dnjxt\" (UID: \"9cf900ad-923c-4c9d-8999-aede0ef54f5a\") " pod="openshift-marketplace/redhat-operators-dnjxt"
Jan 21 16:03:09 crc kubenswrapper[4760]: I0121 16:03:09.541248 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cf900ad-923c-4c9d-8999-aede0ef54f5a-utilities\") pod \"redhat-operators-dnjxt\" (UID: \"9cf900ad-923c-4c9d-8999-aede0ef54f5a\") " pod="openshift-marketplace/redhat-operators-dnjxt"
Jan 21 16:03:09 crc kubenswrapper[4760]: I0121 16:03:09.541984 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cf900ad-923c-4c9d-8999-aede0ef54f5a-catalog-content\") pod \"redhat-operators-dnjxt\" (UID: \"9cf900ad-923c-4c9d-8999-aede0ef54f5a\") " pod="openshift-marketplace/redhat-operators-dnjxt"
Jan 21 16:03:09 crc kubenswrapper[4760]: I0121 16:03:09.542033 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cf900ad-923c-4c9d-8999-aede0ef54f5a-utilities\") pod \"redhat-operators-dnjxt\" (UID: \"9cf900ad-923c-4c9d-8999-aede0ef54f5a\") " pod="openshift-marketplace/redhat-operators-dnjxt"
Jan 21 16:03:09 crc kubenswrapper[4760]: I0121 16:03:09.577868 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-254dz\" (UniqueName: \"kubernetes.io/projected/9cf900ad-923c-4c9d-8999-aede0ef54f5a-kube-api-access-254dz\") pod \"redhat-operators-dnjxt\" (UID: \"9cf900ad-923c-4c9d-8999-aede0ef54f5a\") " pod="openshift-marketplace/redhat-operators-dnjxt"
Jan 21 16:03:09 crc kubenswrapper[4760]: I0121 16:03:09.609535 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dnjxt"
Jan 21 16:03:15 crc kubenswrapper[4760]: I0121 16:03:15.111361 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dnjxt"]
Jan 21 16:03:15 crc kubenswrapper[4760]: I0121 16:03:15.488774 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dnjxt" event={"ID":"9cf900ad-923c-4c9d-8999-aede0ef54f5a","Type":"ContainerStarted","Data":"84d1575a0370e7be78f471475ee17f4955aa08b5574407fae69deb79b11dd004"}
Jan 21 16:03:15 crc kubenswrapper[4760]: I0121 16:03:15.489353 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dnjxt" event={"ID":"9cf900ad-923c-4c9d-8999-aede0ef54f5a","Type":"ContainerStarted","Data":"51b333e9151ecd12416a9dbaa245f23dda9ad239de6e35cdb54408f2d5ef30bc"}
Jan 21 16:03:15 crc kubenswrapper[4760]: I0121 16:03:15.493109 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s77mf" event={"ID":"af556754-b770-4425-b159-c2061788e5c0","Type":"ContainerStarted","Data":"c8f9c9f675d6ca5d70e3f533791c38a2c83037ae4ec72ce0345fb28999f53eda"}
Jan 21 16:03:16 crc kubenswrapper[4760]: I0121 16:03:16.515837 4760 generic.go:334] "Generic (PLEG): container finished" podID="9cf900ad-923c-4c9d-8999-aede0ef54f5a" containerID="84d1575a0370e7be78f471475ee17f4955aa08b5574407fae69deb79b11dd004" exitCode=0
Jan 21 16:03:16 crc kubenswrapper[4760]: I0121 16:03:16.515917 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dnjxt"
event={"ID":"9cf900ad-923c-4c9d-8999-aede0ef54f5a","Type":"ContainerDied","Data":"84d1575a0370e7be78f471475ee17f4955aa08b5574407fae69deb79b11dd004"} Jan 21 16:03:17 crc kubenswrapper[4760]: I0121 16:03:17.524904 4760 generic.go:334] "Generic (PLEG): container finished" podID="af556754-b770-4425-b159-c2061788e5c0" containerID="c8f9c9f675d6ca5d70e3f533791c38a2c83037ae4ec72ce0345fb28999f53eda" exitCode=0 Jan 21 16:03:17 crc kubenswrapper[4760]: I0121 16:03:17.524992 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s77mf" event={"ID":"af556754-b770-4425-b159-c2061788e5c0","Type":"ContainerDied","Data":"c8f9c9f675d6ca5d70e3f533791c38a2c83037ae4ec72ce0345fb28999f53eda"} Jan 21 16:03:17 crc kubenswrapper[4760]: I0121 16:03:17.640067 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-pp2ln" Jan 21 16:03:18 crc kubenswrapper[4760]: I0121 16:03:18.535391 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s77mf" event={"ID":"af556754-b770-4425-b159-c2061788e5c0","Type":"ContainerStarted","Data":"2f9460a97642daf2121583fc5e64b9444552bbcd87b6a7035f3efce78a796fb6"} Jan 21 16:03:18 crc kubenswrapper[4760]: I0121 16:03:18.540156 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dnjxt" event={"ID":"9cf900ad-923c-4c9d-8999-aede0ef54f5a","Type":"ContainerStarted","Data":"9c77d2c09d67b53b4572a987b4d27383c31f64f146635c3e2bd8a1a384b2b24c"} Jan 21 16:03:18 crc kubenswrapper[4760]: I0121 16:03:18.561416 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-s77mf" podStartSLOduration=8.735300981 podStartE2EDuration="19.561386682s" podCreationTimestamp="2026-01-21 16:02:59 +0000 UTC" firstStartedPulling="2026-01-21 16:03:07.427659189 +0000 UTC m=+958.095428777" 
lastFinishedPulling="2026-01-21 16:03:18.2537449 +0000 UTC m=+968.921514478" observedRunningTime="2026-01-21 16:03:18.555871805 +0000 UTC m=+969.223641403" watchObservedRunningTime="2026-01-21 16:03:18.561386682 +0000 UTC m=+969.229156260" Jan 21 16:03:19 crc kubenswrapper[4760]: I0121 16:03:19.678742 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-s77mf" Jan 21 16:03:19 crc kubenswrapper[4760]: I0121 16:03:19.679472 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-s77mf" Jan 21 16:03:20 crc kubenswrapper[4760]: I0121 16:03:20.772044 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-k2dj5"] Jan 21 16:03:20 crc kubenswrapper[4760]: I0121 16:03:20.773999 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k2dj5" Jan 21 16:03:20 crc kubenswrapper[4760]: I0121 16:03:20.785569 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k2dj5"] Jan 21 16:03:20 crc kubenswrapper[4760]: I0121 16:03:20.886247 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-s77mf" podUID="af556754-b770-4425-b159-c2061788e5c0" containerName="registry-server" probeResult="failure" output=< Jan 21 16:03:20 crc kubenswrapper[4760]: timeout: failed to connect service ":50051" within 1s Jan 21 16:03:20 crc kubenswrapper[4760]: > Jan 21 16:03:20 crc kubenswrapper[4760]: I0121 16:03:20.972603 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d267ab8-12dc-43a2-8199-7885783e8601-catalog-content\") pod \"certified-operators-k2dj5\" (UID: \"3d267ab8-12dc-43a2-8199-7885783e8601\") " pod="openshift-marketplace/certified-operators-k2dj5" Jan 21 
16:03:20 crc kubenswrapper[4760]: I0121 16:03:20.972659 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cltcg\" (UniqueName: \"kubernetes.io/projected/3d267ab8-12dc-43a2-8199-7885783e8601-kube-api-access-cltcg\") pod \"certified-operators-k2dj5\" (UID: \"3d267ab8-12dc-43a2-8199-7885783e8601\") " pod="openshift-marketplace/certified-operators-k2dj5" Jan 21 16:03:20 crc kubenswrapper[4760]: I0121 16:03:20.972871 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d267ab8-12dc-43a2-8199-7885783e8601-utilities\") pod \"certified-operators-k2dj5\" (UID: \"3d267ab8-12dc-43a2-8199-7885783e8601\") " pod="openshift-marketplace/certified-operators-k2dj5" Jan 21 16:03:21 crc kubenswrapper[4760]: I0121 16:03:21.073972 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d267ab8-12dc-43a2-8199-7885783e8601-catalog-content\") pod \"certified-operators-k2dj5\" (UID: \"3d267ab8-12dc-43a2-8199-7885783e8601\") " pod="openshift-marketplace/certified-operators-k2dj5" Jan 21 16:03:21 crc kubenswrapper[4760]: I0121 16:03:21.074037 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cltcg\" (UniqueName: \"kubernetes.io/projected/3d267ab8-12dc-43a2-8199-7885783e8601-kube-api-access-cltcg\") pod \"certified-operators-k2dj5\" (UID: \"3d267ab8-12dc-43a2-8199-7885783e8601\") " pod="openshift-marketplace/certified-operators-k2dj5" Jan 21 16:03:21 crc kubenswrapper[4760]: I0121 16:03:21.074069 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d267ab8-12dc-43a2-8199-7885783e8601-utilities\") pod \"certified-operators-k2dj5\" (UID: \"3d267ab8-12dc-43a2-8199-7885783e8601\") " 
pod="openshift-marketplace/certified-operators-k2dj5" Jan 21 16:03:21 crc kubenswrapper[4760]: I0121 16:03:21.074515 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d267ab8-12dc-43a2-8199-7885783e8601-catalog-content\") pod \"certified-operators-k2dj5\" (UID: \"3d267ab8-12dc-43a2-8199-7885783e8601\") " pod="openshift-marketplace/certified-operators-k2dj5" Jan 21 16:03:21 crc kubenswrapper[4760]: I0121 16:03:21.074548 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d267ab8-12dc-43a2-8199-7885783e8601-utilities\") pod \"certified-operators-k2dj5\" (UID: \"3d267ab8-12dc-43a2-8199-7885783e8601\") " pod="openshift-marketplace/certified-operators-k2dj5" Jan 21 16:03:21 crc kubenswrapper[4760]: I0121 16:03:21.183705 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cltcg\" (UniqueName: \"kubernetes.io/projected/3d267ab8-12dc-43a2-8199-7885783e8601-kube-api-access-cltcg\") pod \"certified-operators-k2dj5\" (UID: \"3d267ab8-12dc-43a2-8199-7885783e8601\") " pod="openshift-marketplace/certified-operators-k2dj5" Jan 21 16:03:21 crc kubenswrapper[4760]: I0121 16:03:21.430755 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-k2dj5" Jan 21 16:03:22 crc kubenswrapper[4760]: I0121 16:03:22.576911 4760 generic.go:334] "Generic (PLEG): container finished" podID="9cf900ad-923c-4c9d-8999-aede0ef54f5a" containerID="9c77d2c09d67b53b4572a987b4d27383c31f64f146635c3e2bd8a1a384b2b24c" exitCode=0 Jan 21 16:03:22 crc kubenswrapper[4760]: I0121 16:03:22.576980 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dnjxt" event={"ID":"9cf900ad-923c-4c9d-8999-aede0ef54f5a","Type":"ContainerDied","Data":"9c77d2c09d67b53b4572a987b4d27383c31f64f146635c3e2bd8a1a384b2b24c"} Jan 21 16:03:22 crc kubenswrapper[4760]: W0121 16:03:22.608988 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d267ab8_12dc_43a2_8199_7885783e8601.slice/crio-29fd42d9a78f60e891b57dd6ff1b653af0f8d67457faa248ab9a8ddd3ad04f64 WatchSource:0}: Error finding container 29fd42d9a78f60e891b57dd6ff1b653af0f8d67457faa248ab9a8ddd3ad04f64: Status 404 returned error can't find the container with id 29fd42d9a78f60e891b57dd6ff1b653af0f8d67457faa248ab9a8ddd3ad04f64 Jan 21 16:03:22 crc kubenswrapper[4760]: I0121 16:03:22.616269 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k2dj5"] Jan 21 16:03:23 crc kubenswrapper[4760]: I0121 16:03:23.588146 4760 generic.go:334] "Generic (PLEG): container finished" podID="3d267ab8-12dc-43a2-8199-7885783e8601" containerID="d6f701408593aa929077af793666fc048b04a8fc688afd2e498f5b02c2a8a245" exitCode=0 Jan 21 16:03:23 crc kubenswrapper[4760]: I0121 16:03:23.588218 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k2dj5" event={"ID":"3d267ab8-12dc-43a2-8199-7885783e8601","Type":"ContainerDied","Data":"d6f701408593aa929077af793666fc048b04a8fc688afd2e498f5b02c2a8a245"} Jan 21 16:03:23 crc kubenswrapper[4760]: I0121 
16:03:23.588478 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k2dj5" event={"ID":"3d267ab8-12dc-43a2-8199-7885783e8601","Type":"ContainerStarted","Data":"29fd42d9a78f60e891b57dd6ff1b653af0f8d67457faa248ab9a8ddd3ad04f64"} Jan 21 16:03:23 crc kubenswrapper[4760]: I0121 16:03:23.590712 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dnjxt" event={"ID":"9cf900ad-923c-4c9d-8999-aede0ef54f5a","Type":"ContainerStarted","Data":"8c561f1864de98da207ad20542f075d1b289f6e6f29c87cc5596d01b328581d8"} Jan 21 16:03:23 crc kubenswrapper[4760]: I0121 16:03:23.648049 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dnjxt" podStartSLOduration=8.060283811 podStartE2EDuration="14.648020707s" podCreationTimestamp="2026-01-21 16:03:09 +0000 UTC" firstStartedPulling="2026-01-21 16:03:16.519303841 +0000 UTC m=+967.187073429" lastFinishedPulling="2026-01-21 16:03:23.107040747 +0000 UTC m=+973.774810325" observedRunningTime="2026-01-21 16:03:23.641917763 +0000 UTC m=+974.309687341" watchObservedRunningTime="2026-01-21 16:03:23.648020707 +0000 UTC m=+974.315790285" Jan 21 16:03:27 crc kubenswrapper[4760]: I0121 16:03:27.618276 4760 generic.go:334] "Generic (PLEG): container finished" podID="3d267ab8-12dc-43a2-8199-7885783e8601" containerID="e6963d14d357f703576b0958af9bc058219608c4e7a2fbf733bfa1478e82dd20" exitCode=0 Jan 21 16:03:27 crc kubenswrapper[4760]: I0121 16:03:27.618347 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k2dj5" event={"ID":"3d267ab8-12dc-43a2-8199-7885783e8601","Type":"ContainerDied","Data":"e6963d14d357f703576b0958af9bc058219608c4e7a2fbf733bfa1478e82dd20"} Jan 21 16:03:28 crc kubenswrapper[4760]: I0121 16:03:28.627442 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k2dj5" 
event={"ID":"3d267ab8-12dc-43a2-8199-7885783e8601","Type":"ContainerStarted","Data":"8ed518aaf70cb914b959c0683dc371b0b683a4edffe8894022328b3ad6861f97"} Jan 21 16:03:28 crc kubenswrapper[4760]: I0121 16:03:28.654704 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-k2dj5" podStartSLOduration=4.150370424 podStartE2EDuration="8.654679408s" podCreationTimestamp="2026-01-21 16:03:20 +0000 UTC" firstStartedPulling="2026-01-21 16:03:23.592313728 +0000 UTC m=+974.260083306" lastFinishedPulling="2026-01-21 16:03:28.096622712 +0000 UTC m=+978.764392290" observedRunningTime="2026-01-21 16:03:28.649200705 +0000 UTC m=+979.316970283" watchObservedRunningTime="2026-01-21 16:03:28.654679408 +0000 UTC m=+979.322448986" Jan 21 16:03:29 crc kubenswrapper[4760]: I0121 16:03:29.610102 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dnjxt" Jan 21 16:03:29 crc kubenswrapper[4760]: I0121 16:03:29.610485 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dnjxt" Jan 21 16:03:29 crc kubenswrapper[4760]: I0121 16:03:29.677283 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dnjxt" Jan 21 16:03:29 crc kubenswrapper[4760]: I0121 16:03:29.722754 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-s77mf" Jan 21 16:03:29 crc kubenswrapper[4760]: I0121 16:03:29.737122 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dnjxt" Jan 21 16:03:29 crc kubenswrapper[4760]: I0121 16:03:29.774006 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-s77mf" Jan 21 16:03:31 crc kubenswrapper[4760]: I0121 16:03:31.431590 4760 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-k2dj5" Jan 21 16:03:31 crc kubenswrapper[4760]: I0121 16:03:31.431668 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-k2dj5" Jan 21 16:03:31 crc kubenswrapper[4760]: I0121 16:03:31.485517 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-k2dj5" Jan 21 16:03:31 crc kubenswrapper[4760]: I0121 16:03:31.955618 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s77mf"] Jan 21 16:03:31 crc kubenswrapper[4760]: I0121 16:03:31.955923 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-s77mf" podUID="af556754-b770-4425-b159-c2061788e5c0" containerName="registry-server" containerID="cri-o://2f9460a97642daf2121583fc5e64b9444552bbcd87b6a7035f3efce78a796fb6" gracePeriod=2 Jan 21 16:03:32 crc kubenswrapper[4760]: I0121 16:03:32.152814 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dnjxt"] Jan 21 16:03:32 crc kubenswrapper[4760]: I0121 16:03:32.153102 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dnjxt" podUID="9cf900ad-923c-4c9d-8999-aede0ef54f5a" containerName="registry-server" containerID="cri-o://8c561f1864de98da207ad20542f075d1b289f6e6f29c87cc5596d01b328581d8" gracePeriod=2 Jan 21 16:03:33 crc kubenswrapper[4760]: I0121 16:03:33.663165 4760 generic.go:334] "Generic (PLEG): container finished" podID="af556754-b770-4425-b159-c2061788e5c0" containerID="2f9460a97642daf2121583fc5e64b9444552bbcd87b6a7035f3efce78a796fb6" exitCode=0 Jan 21 16:03:33 crc kubenswrapper[4760]: I0121 16:03:33.663218 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s77mf" 
event={"ID":"af556754-b770-4425-b159-c2061788e5c0","Type":"ContainerDied","Data":"2f9460a97642daf2121583fc5e64b9444552bbcd87b6a7035f3efce78a796fb6"} Jan 21 16:03:33 crc kubenswrapper[4760]: I0121 16:03:33.666947 4760 generic.go:334] "Generic (PLEG): container finished" podID="9cf900ad-923c-4c9d-8999-aede0ef54f5a" containerID="8c561f1864de98da207ad20542f075d1b289f6e6f29c87cc5596d01b328581d8" exitCode=0 Jan 21 16:03:33 crc kubenswrapper[4760]: I0121 16:03:33.666988 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dnjxt" event={"ID":"9cf900ad-923c-4c9d-8999-aede0ef54f5a","Type":"ContainerDied","Data":"8c561f1864de98da207ad20542f075d1b289f6e6f29c87cc5596d01b328581d8"} Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.285094 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dnjxt" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.288398 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s77mf" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.368439 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-254dz\" (UniqueName: \"kubernetes.io/projected/9cf900ad-923c-4c9d-8999-aede0ef54f5a-kube-api-access-254dz\") pod \"9cf900ad-923c-4c9d-8999-aede0ef54f5a\" (UID: \"9cf900ad-923c-4c9d-8999-aede0ef54f5a\") " Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.368510 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af556754-b770-4425-b159-c2061788e5c0-catalog-content\") pod \"af556754-b770-4425-b159-c2061788e5c0\" (UID: \"af556754-b770-4425-b159-c2061788e5c0\") " Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.368535 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cf900ad-923c-4c9d-8999-aede0ef54f5a-utilities\") pod \"9cf900ad-923c-4c9d-8999-aede0ef54f5a\" (UID: \"9cf900ad-923c-4c9d-8999-aede0ef54f5a\") " Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.368607 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af556754-b770-4425-b159-c2061788e5c0-utilities\") pod \"af556754-b770-4425-b159-c2061788e5c0\" (UID: \"af556754-b770-4425-b159-c2061788e5c0\") " Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.368634 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85rd2\" (UniqueName: \"kubernetes.io/projected/af556754-b770-4425-b159-c2061788e5c0-kube-api-access-85rd2\") pod \"af556754-b770-4425-b159-c2061788e5c0\" (UID: \"af556754-b770-4425-b159-c2061788e5c0\") " Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.368699 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cf900ad-923c-4c9d-8999-aede0ef54f5a-catalog-content\") pod \"9cf900ad-923c-4c9d-8999-aede0ef54f5a\" (UID: \"9cf900ad-923c-4c9d-8999-aede0ef54f5a\") " Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.371810 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af556754-b770-4425-b159-c2061788e5c0-utilities" (OuterVolumeSpecName: "utilities") pod "af556754-b770-4425-b159-c2061788e5c0" (UID: "af556754-b770-4425-b159-c2061788e5c0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.372096 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cf900ad-923c-4c9d-8999-aede0ef54f5a-utilities" (OuterVolumeSpecName: "utilities") pod "9cf900ad-923c-4c9d-8999-aede0ef54f5a" (UID: "9cf900ad-923c-4c9d-8999-aede0ef54f5a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.377187 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cf900ad-923c-4c9d-8999-aede0ef54f5a-kube-api-access-254dz" (OuterVolumeSpecName: "kube-api-access-254dz") pod "9cf900ad-923c-4c9d-8999-aede0ef54f5a" (UID: "9cf900ad-923c-4c9d-8999-aede0ef54f5a"). InnerVolumeSpecName "kube-api-access-254dz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.386497 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af556754-b770-4425-b159-c2061788e5c0-kube-api-access-85rd2" (OuterVolumeSpecName: "kube-api-access-85rd2") pod "af556754-b770-4425-b159-c2061788e5c0" (UID: "af556754-b770-4425-b159-c2061788e5c0"). InnerVolumeSpecName "kube-api-access-85rd2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.432799 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af556754-b770-4425-b159-c2061788e5c0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "af556754-b770-4425-b159-c2061788e5c0" (UID: "af556754-b770-4425-b159-c2061788e5c0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.470645 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-254dz\" (UniqueName: \"kubernetes.io/projected/9cf900ad-923c-4c9d-8999-aede0ef54f5a-kube-api-access-254dz\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.470686 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af556754-b770-4425-b159-c2061788e5c0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.470703 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cf900ad-923c-4c9d-8999-aede0ef54f5a-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.470722 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af556754-b770-4425-b159-c2061788e5c0-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.470735 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85rd2\" (UniqueName: \"kubernetes.io/projected/af556754-b770-4425-b159-c2061788e5c0-kube-api-access-85rd2\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.523565 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/9cf900ad-923c-4c9d-8999-aede0ef54f5a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9cf900ad-923c-4c9d-8999-aede0ef54f5a" (UID: "9cf900ad-923c-4c9d-8999-aede0ef54f5a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.572312 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cf900ad-923c-4c9d-8999-aede0ef54f5a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.576025 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-mxzx4"] Jan 21 16:03:34 crc kubenswrapper[4760]: E0121 16:03:34.576395 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cf900ad-923c-4c9d-8999-aede0ef54f5a" containerName="registry-server" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.576422 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cf900ad-923c-4c9d-8999-aede0ef54f5a" containerName="registry-server" Jan 21 16:03:34 crc kubenswrapper[4760]: E0121 16:03:34.576436 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af556754-b770-4425-b159-c2061788e5c0" containerName="extract-content" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.576443 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="af556754-b770-4425-b159-c2061788e5c0" containerName="extract-content" Jan 21 16:03:34 crc kubenswrapper[4760]: E0121 16:03:34.576455 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cf900ad-923c-4c9d-8999-aede0ef54f5a" containerName="extract-utilities" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.576465 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cf900ad-923c-4c9d-8999-aede0ef54f5a" containerName="extract-utilities" Jan 21 16:03:34 crc kubenswrapper[4760]: E0121 16:03:34.576494 4760 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af556754-b770-4425-b159-c2061788e5c0" containerName="extract-utilities" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.576505 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="af556754-b770-4425-b159-c2061788e5c0" containerName="extract-utilities" Jan 21 16:03:34 crc kubenswrapper[4760]: E0121 16:03:34.576516 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af556754-b770-4425-b159-c2061788e5c0" containerName="registry-server" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.576522 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="af556754-b770-4425-b159-c2061788e5c0" containerName="registry-server" Jan 21 16:03:34 crc kubenswrapper[4760]: E0121 16:03:34.576534 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cf900ad-923c-4c9d-8999-aede0ef54f5a" containerName="extract-content" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.576540 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cf900ad-923c-4c9d-8999-aede0ef54f5a" containerName="extract-content" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.576664 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cf900ad-923c-4c9d-8999-aede0ef54f5a" containerName="registry-server" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.576684 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="af556754-b770-4425-b159-c2061788e5c0" containerName="registry-server" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.577461 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-mxzx4" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.580697 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-mv67k" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.580946 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.581056 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.581997 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.598987 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-mxzx4"] Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.646373 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-p7x9p"] Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.647903 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-p7x9p" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.651463 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.668511 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-p7x9p"] Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.673479 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8nqz\" (UniqueName: \"kubernetes.io/projected/311ca2cc-2871-4326-a66d-7ebacf5d0739-kube-api-access-l8nqz\") pod \"dnsmasq-dns-675f4bcbfc-mxzx4\" (UID: \"311ca2cc-2871-4326-a66d-7ebacf5d0739\") " pod="openstack/dnsmasq-dns-675f4bcbfc-mxzx4" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.673612 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/311ca2cc-2871-4326-a66d-7ebacf5d0739-config\") pod \"dnsmasq-dns-675f4bcbfc-mxzx4\" (UID: \"311ca2cc-2871-4326-a66d-7ebacf5d0739\") " pod="openstack/dnsmasq-dns-675f4bcbfc-mxzx4" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.675064 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dnjxt" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.675277 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dnjxt" event={"ID":"9cf900ad-923c-4c9d-8999-aede0ef54f5a","Type":"ContainerDied","Data":"51b333e9151ecd12416a9dbaa245f23dda9ad239de6e35cdb54408f2d5ef30bc"} Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.675416 4760 scope.go:117] "RemoveContainer" containerID="8c561f1864de98da207ad20542f075d1b289f6e6f29c87cc5596d01b328581d8" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.677582 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s77mf" event={"ID":"af556754-b770-4425-b159-c2061788e5c0","Type":"ContainerDied","Data":"7810d9e06c85f2b95f0525de568d77a28b55352d81dbb86ec7f8398e66635f2c"} Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.677677 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s77mf" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.696035 4760 scope.go:117] "RemoveContainer" containerID="9c77d2c09d67b53b4572a987b4d27383c31f64f146635c3e2bd8a1a384b2b24c" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.734743 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dnjxt"] Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.737925 4760 scope.go:117] "RemoveContainer" containerID="84d1575a0370e7be78f471475ee17f4955aa08b5574407fae69deb79b11dd004" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.742596 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dnjxt"] Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.748889 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s77mf"] Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.755018 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-s77mf"] Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.756075 4760 scope.go:117] "RemoveContainer" containerID="2f9460a97642daf2121583fc5e64b9444552bbcd87b6a7035f3efce78a796fb6" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.772643 4760 scope.go:117] "RemoveContainer" containerID="c8f9c9f675d6ca5d70e3f533791c38a2c83037ae4ec72ce0345fb28999f53eda" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.775471 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fead1d9-342f-49d5-bf14-86767afa754f-config\") pod \"dnsmasq-dns-78dd6ddcc-p7x9p\" (UID: \"8fead1d9-342f-49d5-bf14-86767afa754f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-p7x9p" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.775564 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8fead1d9-342f-49d5-bf14-86767afa754f-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-p7x9p\" (UID: \"8fead1d9-342f-49d5-bf14-86767afa754f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-p7x9p" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.775613 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8nqz\" (UniqueName: \"kubernetes.io/projected/311ca2cc-2871-4326-a66d-7ebacf5d0739-kube-api-access-l8nqz\") pod \"dnsmasq-dns-675f4bcbfc-mxzx4\" (UID: \"311ca2cc-2871-4326-a66d-7ebacf5d0739\") " pod="openstack/dnsmasq-dns-675f4bcbfc-mxzx4" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.776203 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxptk\" (UniqueName: \"kubernetes.io/projected/8fead1d9-342f-49d5-bf14-86767afa754f-kube-api-access-fxptk\") pod \"dnsmasq-dns-78dd6ddcc-p7x9p\" (UID: \"8fead1d9-342f-49d5-bf14-86767afa754f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-p7x9p" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.776248 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/311ca2cc-2871-4326-a66d-7ebacf5d0739-config\") pod \"dnsmasq-dns-675f4bcbfc-mxzx4\" (UID: \"311ca2cc-2871-4326-a66d-7ebacf5d0739\") " pod="openstack/dnsmasq-dns-675f4bcbfc-mxzx4" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.777413 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/311ca2cc-2871-4326-a66d-7ebacf5d0739-config\") pod \"dnsmasq-dns-675f4bcbfc-mxzx4\" (UID: \"311ca2cc-2871-4326-a66d-7ebacf5d0739\") " pod="openstack/dnsmasq-dns-675f4bcbfc-mxzx4" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.793241 4760 scope.go:117] "RemoveContainer" 
containerID="c7a348c930254f47a90661b086cf21eb829628b32ad9b700d97614d49c233075" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.799595 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8nqz\" (UniqueName: \"kubernetes.io/projected/311ca2cc-2871-4326-a66d-7ebacf5d0739-kube-api-access-l8nqz\") pod \"dnsmasq-dns-675f4bcbfc-mxzx4\" (UID: \"311ca2cc-2871-4326-a66d-7ebacf5d0739\") " pod="openstack/dnsmasq-dns-675f4bcbfc-mxzx4" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.877543 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8fead1d9-342f-49d5-bf14-86767afa754f-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-p7x9p\" (UID: \"8fead1d9-342f-49d5-bf14-86767afa754f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-p7x9p" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.877662 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxptk\" (UniqueName: \"kubernetes.io/projected/8fead1d9-342f-49d5-bf14-86767afa754f-kube-api-access-fxptk\") pod \"dnsmasq-dns-78dd6ddcc-p7x9p\" (UID: \"8fead1d9-342f-49d5-bf14-86767afa754f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-p7x9p" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.877729 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fead1d9-342f-49d5-bf14-86767afa754f-config\") pod \"dnsmasq-dns-78dd6ddcc-p7x9p\" (UID: \"8fead1d9-342f-49d5-bf14-86767afa754f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-p7x9p" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.878838 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fead1d9-342f-49d5-bf14-86767afa754f-config\") pod \"dnsmasq-dns-78dd6ddcc-p7x9p\" (UID: \"8fead1d9-342f-49d5-bf14-86767afa754f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-p7x9p" Jan 21 16:03:34 
crc kubenswrapper[4760]: I0121 16:03:34.878995 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8fead1d9-342f-49d5-bf14-86767afa754f-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-p7x9p\" (UID: \"8fead1d9-342f-49d5-bf14-86767afa754f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-p7x9p" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.896296 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxptk\" (UniqueName: \"kubernetes.io/projected/8fead1d9-342f-49d5-bf14-86767afa754f-kube-api-access-fxptk\") pod \"dnsmasq-dns-78dd6ddcc-p7x9p\" (UID: \"8fead1d9-342f-49d5-bf14-86767afa754f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-p7x9p" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.898933 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-mxzx4" Jan 21 16:03:34 crc kubenswrapper[4760]: I0121 16:03:34.963759 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-p7x9p" Jan 21 16:03:35 crc kubenswrapper[4760]: I0121 16:03:35.254189 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-p7x9p"] Jan 21 16:03:35 crc kubenswrapper[4760]: I0121 16:03:35.345970 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-mxzx4"] Jan 21 16:03:35 crc kubenswrapper[4760]: W0121 16:03:35.350839 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod311ca2cc_2871_4326_a66d_7ebacf5d0739.slice/crio-4283b5ca5806c476cb853b8fdf9c1f4cf1b91acf8fa6d2be66e4d8061dc95540 WatchSource:0}: Error finding container 4283b5ca5806c476cb853b8fdf9c1f4cf1b91acf8fa6d2be66e4d8061dc95540: Status 404 returned error can't find the container with id 4283b5ca5806c476cb853b8fdf9c1f4cf1b91acf8fa6d2be66e4d8061dc95540 Jan 21 16:03:35 crc kubenswrapper[4760]: I0121 16:03:35.630776 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cf900ad-923c-4c9d-8999-aede0ef54f5a" path="/var/lib/kubelet/pods/9cf900ad-923c-4c9d-8999-aede0ef54f5a/volumes" Jan 21 16:03:35 crc kubenswrapper[4760]: I0121 16:03:35.631732 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af556754-b770-4425-b159-c2061788e5c0" path="/var/lib/kubelet/pods/af556754-b770-4425-b159-c2061788e5c0/volumes" Jan 21 16:03:35 crc kubenswrapper[4760]: I0121 16:03:35.685654 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-mxzx4" event={"ID":"311ca2cc-2871-4326-a66d-7ebacf5d0739","Type":"ContainerStarted","Data":"4283b5ca5806c476cb853b8fdf9c1f4cf1b91acf8fa6d2be66e4d8061dc95540"} Jan 21 16:03:35 crc kubenswrapper[4760]: I0121 16:03:35.687520 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-p7x9p" 
event={"ID":"8fead1d9-342f-49d5-bf14-86767afa754f","Type":"ContainerStarted","Data":"30da348f69a4121551a769cdf657e2d763a496fe50aee4af7b0fce21e3a41abd"} Jan 21 16:03:37 crc kubenswrapper[4760]: I0121 16:03:37.397128 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-mxzx4"] Jan 21 16:03:37 crc kubenswrapper[4760]: I0121 16:03:37.432722 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-fm5r8"] Jan 21 16:03:37 crc kubenswrapper[4760]: I0121 16:03:37.433991 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-fm5r8" Jan 21 16:03:37 crc kubenswrapper[4760]: I0121 16:03:37.445784 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-fm5r8"] Jan 21 16:03:37 crc kubenswrapper[4760]: I0121 16:03:37.517306 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd396dae-aefd-4646-8418-cd57cb44d7b7-dns-svc\") pod \"dnsmasq-dns-666b6646f7-fm5r8\" (UID: \"bd396dae-aefd-4646-8418-cd57cb44d7b7\") " pod="openstack/dnsmasq-dns-666b6646f7-fm5r8" Jan 21 16:03:37 crc kubenswrapper[4760]: I0121 16:03:37.517639 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd396dae-aefd-4646-8418-cd57cb44d7b7-config\") pod \"dnsmasq-dns-666b6646f7-fm5r8\" (UID: \"bd396dae-aefd-4646-8418-cd57cb44d7b7\") " pod="openstack/dnsmasq-dns-666b6646f7-fm5r8" Jan 21 16:03:37 crc kubenswrapper[4760]: I0121 16:03:37.517712 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcttk\" (UniqueName: \"kubernetes.io/projected/bd396dae-aefd-4646-8418-cd57cb44d7b7-kube-api-access-lcttk\") pod \"dnsmasq-dns-666b6646f7-fm5r8\" (UID: \"bd396dae-aefd-4646-8418-cd57cb44d7b7\") " 
pod="openstack/dnsmasq-dns-666b6646f7-fm5r8" Jan 21 16:03:37 crc kubenswrapper[4760]: I0121 16:03:37.618966 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd396dae-aefd-4646-8418-cd57cb44d7b7-dns-svc\") pod \"dnsmasq-dns-666b6646f7-fm5r8\" (UID: \"bd396dae-aefd-4646-8418-cd57cb44d7b7\") " pod="openstack/dnsmasq-dns-666b6646f7-fm5r8" Jan 21 16:03:37 crc kubenswrapper[4760]: I0121 16:03:37.619071 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd396dae-aefd-4646-8418-cd57cb44d7b7-config\") pod \"dnsmasq-dns-666b6646f7-fm5r8\" (UID: \"bd396dae-aefd-4646-8418-cd57cb44d7b7\") " pod="openstack/dnsmasq-dns-666b6646f7-fm5r8" Jan 21 16:03:37 crc kubenswrapper[4760]: I0121 16:03:37.619092 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcttk\" (UniqueName: \"kubernetes.io/projected/bd396dae-aefd-4646-8418-cd57cb44d7b7-kube-api-access-lcttk\") pod \"dnsmasq-dns-666b6646f7-fm5r8\" (UID: \"bd396dae-aefd-4646-8418-cd57cb44d7b7\") " pod="openstack/dnsmasq-dns-666b6646f7-fm5r8" Jan 21 16:03:37 crc kubenswrapper[4760]: I0121 16:03:37.619894 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd396dae-aefd-4646-8418-cd57cb44d7b7-dns-svc\") pod \"dnsmasq-dns-666b6646f7-fm5r8\" (UID: \"bd396dae-aefd-4646-8418-cd57cb44d7b7\") " pod="openstack/dnsmasq-dns-666b6646f7-fm5r8" Jan 21 16:03:37 crc kubenswrapper[4760]: I0121 16:03:37.619989 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd396dae-aefd-4646-8418-cd57cb44d7b7-config\") pod \"dnsmasq-dns-666b6646f7-fm5r8\" (UID: \"bd396dae-aefd-4646-8418-cd57cb44d7b7\") " pod="openstack/dnsmasq-dns-666b6646f7-fm5r8" Jan 21 16:03:37 crc kubenswrapper[4760]: I0121 16:03:37.653248 4760 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcttk\" (UniqueName: \"kubernetes.io/projected/bd396dae-aefd-4646-8418-cd57cb44d7b7-kube-api-access-lcttk\") pod \"dnsmasq-dns-666b6646f7-fm5r8\" (UID: \"bd396dae-aefd-4646-8418-cd57cb44d7b7\") " pod="openstack/dnsmasq-dns-666b6646f7-fm5r8" Jan 21 16:03:37 crc kubenswrapper[4760]: I0121 16:03:37.719972 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-p7x9p"] Jan 21 16:03:37 crc kubenswrapper[4760]: I0121 16:03:37.747114 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-k6gph"] Jan 21 16:03:37 crc kubenswrapper[4760]: I0121 16:03:37.748670 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-k6gph" Jan 21 16:03:37 crc kubenswrapper[4760]: I0121 16:03:37.752149 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-fm5r8" Jan 21 16:03:37 crc kubenswrapper[4760]: I0121 16:03:37.765843 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-k6gph"] Jan 21 16:03:37 crc kubenswrapper[4760]: I0121 16:03:37.829032 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-k6gph\" (UID: \"1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef\") " pod="openstack/dnsmasq-dns-57d769cc4f-k6gph" Jan 21 16:03:37 crc kubenswrapper[4760]: I0121 16:03:37.829534 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp44b\" (UniqueName: \"kubernetes.io/projected/1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef-kube-api-access-jp44b\") pod \"dnsmasq-dns-57d769cc4f-k6gph\" (UID: \"1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef\") " pod="openstack/dnsmasq-dns-57d769cc4f-k6gph" 
Jan 21 16:03:37 crc kubenswrapper[4760]: I0121 16:03:37.829639 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef-config\") pod \"dnsmasq-dns-57d769cc4f-k6gph\" (UID: \"1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef\") " pod="openstack/dnsmasq-dns-57d769cc4f-k6gph" Jan 21 16:03:37 crc kubenswrapper[4760]: I0121 16:03:37.931449 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef-config\") pod \"dnsmasq-dns-57d769cc4f-k6gph\" (UID: \"1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef\") " pod="openstack/dnsmasq-dns-57d769cc4f-k6gph" Jan 21 16:03:37 crc kubenswrapper[4760]: I0121 16:03:37.931560 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-k6gph\" (UID: \"1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef\") " pod="openstack/dnsmasq-dns-57d769cc4f-k6gph" Jan 21 16:03:37 crc kubenswrapper[4760]: I0121 16:03:37.931612 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jp44b\" (UniqueName: \"kubernetes.io/projected/1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef-kube-api-access-jp44b\") pod \"dnsmasq-dns-57d769cc4f-k6gph\" (UID: \"1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef\") " pod="openstack/dnsmasq-dns-57d769cc4f-k6gph" Jan 21 16:03:37 crc kubenswrapper[4760]: I0121 16:03:37.932862 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef-config\") pod \"dnsmasq-dns-57d769cc4f-k6gph\" (UID: \"1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef\") " pod="openstack/dnsmasq-dns-57d769cc4f-k6gph" Jan 21 16:03:37 crc kubenswrapper[4760]: I0121 16:03:37.933445 4760 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-k6gph\" (UID: \"1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef\") " pod="openstack/dnsmasq-dns-57d769cc4f-k6gph" Jan 21 16:03:37 crc kubenswrapper[4760]: I0121 16:03:37.952237 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jp44b\" (UniqueName: \"kubernetes.io/projected/1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef-kube-api-access-jp44b\") pod \"dnsmasq-dns-57d769cc4f-k6gph\" (UID: \"1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef\") " pod="openstack/dnsmasq-dns-57d769cc4f-k6gph" Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.154084 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-k6gph" Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.281548 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-fm5r8"] Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.588357 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-k6gph"] Jan 21 16:03:38 crc kubenswrapper[4760]: W0121 16:03:38.594142 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1afcd4c8_23d6_4e7e_9665_1f8ed0b5b3ef.slice/crio-e1f4f036266dec528347239c822ce39fd90f90c68aec27f36ca257d2569c22b3 WatchSource:0}: Error finding container e1f4f036266dec528347239c822ce39fd90f90c68aec27f36ca257d2569c22b3: Status 404 returned error can't find the container with id e1f4f036266dec528347239c822ce39fd90f90c68aec27f36ca257d2569c22b3 Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.726768 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-fm5r8" 
event={"ID":"bd396dae-aefd-4646-8418-cd57cb44d7b7","Type":"ContainerStarted","Data":"8dd02b336930ac5d7bceca40dc5e196ba41d6ce619474ebbe3c08a39a088c0d0"} Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.728159 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-k6gph" event={"ID":"1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef","Type":"ContainerStarted","Data":"e1f4f036266dec528347239c822ce39fd90f90c68aec27f36ca257d2569c22b3"} Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.827164 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.830496 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.832851 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.833149 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.833261 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-289fm" Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.833435 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.833651 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.834027 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.834210 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 21 16:03:38 crc 
kubenswrapper[4760]: I0121 16:03:38.838261 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.900798 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.902390 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.912347 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.912671 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.912797 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-dh775" Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.912946 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.913410 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.913599 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.915675 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.933207 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.958271 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/06b9d67d-1790-43ec-8009-91d0cd43e6da-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.958371 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7d829f67-5ff7-4334-bb2d-2767a311159c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " pod="openstack/rabbitmq-server-0" Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.958450 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7d829f67-5ff7-4334-bb2d-2767a311159c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " pod="openstack/rabbitmq-server-0" Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.958479 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr48k\" (UniqueName: \"kubernetes.io/projected/06b9d67d-1790-43ec-8009-91d0cd43e6da-kube-api-access-gr48k\") pod \"rabbitmq-cell1-server-0\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.958506 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7d829f67-5ff7-4334-bb2d-2767a311159c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " pod="openstack/rabbitmq-server-0" Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.958529 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " pod="openstack/rabbitmq-server-0" Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.958552 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/06b9d67d-1790-43ec-8009-91d0cd43e6da-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.958576 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/06b9d67d-1790-43ec-8009-91d0cd43e6da-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.958600 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7d829f67-5ff7-4334-bb2d-2767a311159c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " pod="openstack/rabbitmq-server-0" Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.958629 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7d829f67-5ff7-4334-bb2d-2767a311159c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " pod="openstack/rabbitmq-server-0" Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.958654 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/06b9d67d-1790-43ec-8009-91d0cd43e6da-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.958678 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/06b9d67d-1790-43ec-8009-91d0cd43e6da-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.958699 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7d829f67-5ff7-4334-bb2d-2767a311159c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " pod="openstack/rabbitmq-server-0" Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.958895 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9lql\" (UniqueName: \"kubernetes.io/projected/7d829f67-5ff7-4334-bb2d-2767a311159c-kube-api-access-q9lql\") pod \"rabbitmq-server-0\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " pod="openstack/rabbitmq-server-0" Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.960482 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7d829f67-5ff7-4334-bb2d-2767a311159c-config-data\") pod \"rabbitmq-server-0\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " pod="openstack/rabbitmq-server-0" Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.960560 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/06b9d67d-1790-43ec-8009-91d0cd43e6da-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.960582 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/06b9d67d-1790-43ec-8009-91d0cd43e6da-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.960617 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7d829f67-5ff7-4334-bb2d-2767a311159c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " pod="openstack/rabbitmq-server-0" Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.960674 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.960692 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/06b9d67d-1790-43ec-8009-91d0cd43e6da-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.960728 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/7d829f67-5ff7-4334-bb2d-2767a311159c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " pod="openstack/rabbitmq-server-0" Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.960750 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/06b9d67d-1790-43ec-8009-91d0cd43e6da-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.991481 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bxqfs"] Jan 21 16:03:38 crc kubenswrapper[4760]: I0121 16:03:38.993140 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bxqfs" Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.006693 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bxqfs"] Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.062180 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/06b9d67d-1790-43ec-8009-91d0cd43e6da-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.062232 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.062254 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7d829f67-5ff7-4334-bb2d-2767a311159c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " pod="openstack/rabbitmq-server-0" Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.062278 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/06b9d67d-1790-43ec-8009-91d0cd43e6da-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.062301 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b88b3abe-b642-4d65-b822-5b62d6095959-utilities\") pod \"redhat-marketplace-bxqfs\" (UID: \"b88b3abe-b642-4d65-b822-5b62d6095959\") " pod="openshift-marketplace/redhat-marketplace-bxqfs" Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.062335 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/06b9d67d-1790-43ec-8009-91d0cd43e6da-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.062366 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7d829f67-5ff7-4334-bb2d-2767a311159c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " pod="openstack/rabbitmq-server-0" Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.062392 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/7d829f67-5ff7-4334-bb2d-2767a311159c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " pod="openstack/rabbitmq-server-0" Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.062410 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gr48k\" (UniqueName: \"kubernetes.io/projected/06b9d67d-1790-43ec-8009-91d0cd43e6da-kube-api-access-gr48k\") pod \"rabbitmq-cell1-server-0\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.062435 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7d829f67-5ff7-4334-bb2d-2767a311159c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " pod="openstack/rabbitmq-server-0" Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.062452 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/06b9d67d-1790-43ec-8009-91d0cd43e6da-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.062469 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " pod="openstack/rabbitmq-server-0" Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.062482 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/06b9d67d-1790-43ec-8009-91d0cd43e6da-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.062498 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7d829f67-5ff7-4334-bb2d-2767a311159c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " pod="openstack/rabbitmq-server-0" Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.062518 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7d829f67-5ff7-4334-bb2d-2767a311159c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " pod="openstack/rabbitmq-server-0" Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.062546 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/06b9d67d-1790-43ec-8009-91d0cd43e6da-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.062563 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/06b9d67d-1790-43ec-8009-91d0cd43e6da-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.062579 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7d829f67-5ff7-4334-bb2d-2767a311159c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " pod="openstack/rabbitmq-server-0" Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.062595 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9lql\" (UniqueName: \"kubernetes.io/projected/7d829f67-5ff7-4334-bb2d-2767a311159c-kube-api-access-q9lql\") pod \"rabbitmq-server-0\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " pod="openstack/rabbitmq-server-0" Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.062615 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7d829f67-5ff7-4334-bb2d-2767a311159c-config-data\") pod \"rabbitmq-server-0\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " pod="openstack/rabbitmq-server-0" Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.062636 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b88b3abe-b642-4d65-b822-5b62d6095959-catalog-content\") pod \"redhat-marketplace-bxqfs\" (UID: \"b88b3abe-b642-4d65-b822-5b62d6095959\") " pod="openshift-marketplace/redhat-marketplace-bxqfs" Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.062664 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/06b9d67d-1790-43ec-8009-91d0cd43e6da-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.062682 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/06b9d67d-1790-43ec-8009-91d0cd43e6da-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.062703 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-4f5vs\" (UniqueName: \"kubernetes.io/projected/b88b3abe-b642-4d65-b822-5b62d6095959-kube-api-access-4f5vs\") pod \"redhat-marketplace-bxqfs\" (UID: \"b88b3abe-b642-4d65-b822-5b62d6095959\") " pod="openshift-marketplace/redhat-marketplace-bxqfs" Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.062719 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7d829f67-5ff7-4334-bb2d-2767a311159c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " pod="openstack/rabbitmq-server-0" Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.063158 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7d829f67-5ff7-4334-bb2d-2767a311159c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " pod="openstack/rabbitmq-server-0" Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.063476 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.066644 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/06b9d67d-1790-43ec-8009-91d0cd43e6da-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.067806 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/7d829f67-5ff7-4334-bb2d-2767a311159c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " pod="openstack/rabbitmq-server-0" Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.070637 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/06b9d67d-1790-43ec-8009-91d0cd43e6da-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.070658 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/06b9d67d-1790-43ec-8009-91d0cd43e6da-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.070948 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/06b9d67d-1790-43ec-8009-91d0cd43e6da-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.071082 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7d829f67-5ff7-4334-bb2d-2767a311159c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " pod="openstack/rabbitmq-server-0" Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.071116 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") device mount path 
\"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.071183 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/06b9d67d-1790-43ec-8009-91d0cd43e6da-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.072626 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7d829f67-5ff7-4334-bb2d-2767a311159c-config-data\") pod \"rabbitmq-server-0\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " pod="openstack/rabbitmq-server-0" Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.073515 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7d829f67-5ff7-4334-bb2d-2767a311159c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " pod="openstack/rabbitmq-server-0" Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.076257 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/06b9d67d-1790-43ec-8009-91d0cd43e6da-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.079962 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/06b9d67d-1790-43ec-8009-91d0cd43e6da-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.080619 4760 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7d829f67-5ff7-4334-bb2d-2767a311159c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " pod="openstack/rabbitmq-server-0" Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.082435 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7d829f67-5ff7-4334-bb2d-2767a311159c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " pod="openstack/rabbitmq-server-0" Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.088701 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/06b9d67d-1790-43ec-8009-91d0cd43e6da-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.089553 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/06b9d67d-1790-43ec-8009-91d0cd43e6da-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.090045 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7d829f67-5ff7-4334-bb2d-2767a311159c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " pod="openstack/rabbitmq-server-0" Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.105947 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9lql\" (UniqueName: \"kubernetes.io/projected/7d829f67-5ff7-4334-bb2d-2767a311159c-kube-api-access-q9lql\") pod \"rabbitmq-server-0\" (UID: 
\"7d829f67-5ff7-4334-bb2d-2767a311159c\") " pod="openstack/rabbitmq-server-0" Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.111733 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gr48k\" (UniqueName: \"kubernetes.io/projected/06b9d67d-1790-43ec-8009-91d0cd43e6da-kube-api-access-gr48k\") pod \"rabbitmq-cell1-server-0\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.136228 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7d829f67-5ff7-4334-bb2d-2767a311159c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " pod="openstack/rabbitmq-server-0" Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.139533 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.165994 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b88b3abe-b642-4d65-b822-5b62d6095959-utilities\") pod \"redhat-marketplace-bxqfs\" (UID: \"b88b3abe-b642-4d65-b822-5b62d6095959\") " pod="openshift-marketplace/redhat-marketplace-bxqfs" Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.166155 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b88b3abe-b642-4d65-b822-5b62d6095959-catalog-content\") pod \"redhat-marketplace-bxqfs\" (UID: \"b88b3abe-b642-4d65-b822-5b62d6095959\") " pod="openshift-marketplace/redhat-marketplace-bxqfs" Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 
16:03:39.166208 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4f5vs\" (UniqueName: \"kubernetes.io/projected/b88b3abe-b642-4d65-b822-5b62d6095959-kube-api-access-4f5vs\") pod \"redhat-marketplace-bxqfs\" (UID: \"b88b3abe-b642-4d65-b822-5b62d6095959\") " pod="openshift-marketplace/redhat-marketplace-bxqfs" Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.169085 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b88b3abe-b642-4d65-b822-5b62d6095959-utilities\") pod \"redhat-marketplace-bxqfs\" (UID: \"b88b3abe-b642-4d65-b822-5b62d6095959\") " pod="openshift-marketplace/redhat-marketplace-bxqfs" Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.169142 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b88b3abe-b642-4d65-b822-5b62d6095959-catalog-content\") pod \"redhat-marketplace-bxqfs\" (UID: \"b88b3abe-b642-4d65-b822-5b62d6095959\") " pod="openshift-marketplace/redhat-marketplace-bxqfs" Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.176629 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " pod="openstack/rabbitmq-server-0" Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.205319 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4f5vs\" (UniqueName: \"kubernetes.io/projected/b88b3abe-b642-4d65-b822-5b62d6095959-kube-api-access-4f5vs\") pod \"redhat-marketplace-bxqfs\" (UID: \"b88b3abe-b642-4d65-b822-5b62d6095959\") " pod="openshift-marketplace/redhat-marketplace-bxqfs" Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.283521 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.340758 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bxqfs" Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.474742 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.704469 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bxqfs"] Jan 21 16:03:39 crc kubenswrapper[4760]: I0121 16:03:39.783192 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 16:03:39 crc kubenswrapper[4760]: W0121 16:03:39.796208 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb88b3abe_b642_4d65_b822_5b62d6095959.slice/crio-f99d79edb43f406eefad8b2bf5c43d8cb6c67afb47dc2beeecadfa37fc9351b3 WatchSource:0}: Error finding container f99d79edb43f406eefad8b2bf5c43d8cb6c67afb47dc2beeecadfa37fc9351b3: Status 404 returned error can't find the container with id f99d79edb43f406eefad8b2bf5c43d8cb6c67afb47dc2beeecadfa37fc9351b3 Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.190262 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.214231 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.226266 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.230694 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-fnn4k" Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.230981 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.232066 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.233080 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.240129 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.240749 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.417078 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/29bd8985-5f22-46e9-9868-607bf9be273e-config-data-generated\") pod \"openstack-galera-0\" (UID: \"29bd8985-5f22-46e9-9868-607bf9be273e\") " pod="openstack/openstack-galera-0" Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.417354 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/29bd8985-5f22-46e9-9868-607bf9be273e-kolla-config\") pod \"openstack-galera-0\" (UID: \"29bd8985-5f22-46e9-9868-607bf9be273e\") " pod="openstack/openstack-galera-0" Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.417396 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-wc25d\" (UniqueName: \"kubernetes.io/projected/29bd8985-5f22-46e9-9868-607bf9be273e-kube-api-access-wc25d\") pod \"openstack-galera-0\" (UID: \"29bd8985-5f22-46e9-9868-607bf9be273e\") " pod="openstack/openstack-galera-0" Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.417416 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/29bd8985-5f22-46e9-9868-607bf9be273e-config-data-default\") pod \"openstack-galera-0\" (UID: \"29bd8985-5f22-46e9-9868-607bf9be273e\") " pod="openstack/openstack-galera-0" Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.417439 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"29bd8985-5f22-46e9-9868-607bf9be273e\") " pod="openstack/openstack-galera-0" Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.417466 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29bd8985-5f22-46e9-9868-607bf9be273e-operator-scripts\") pod \"openstack-galera-0\" (UID: \"29bd8985-5f22-46e9-9868-607bf9be273e\") " pod="openstack/openstack-galera-0" Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.417482 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/29bd8985-5f22-46e9-9868-607bf9be273e-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"29bd8985-5f22-46e9-9868-607bf9be273e\") " pod="openstack/openstack-galera-0" Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.417526 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/29bd8985-5f22-46e9-9868-607bf9be273e-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"29bd8985-5f22-46e9-9868-607bf9be273e\") " pod="openstack/openstack-galera-0" Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.519807 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/29bd8985-5f22-46e9-9868-607bf9be273e-config-data-generated\") pod \"openstack-galera-0\" (UID: \"29bd8985-5f22-46e9-9868-607bf9be273e\") " pod="openstack/openstack-galera-0" Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.519860 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/29bd8985-5f22-46e9-9868-607bf9be273e-kolla-config\") pod \"openstack-galera-0\" (UID: \"29bd8985-5f22-46e9-9868-607bf9be273e\") " pod="openstack/openstack-galera-0" Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.519913 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wc25d\" (UniqueName: \"kubernetes.io/projected/29bd8985-5f22-46e9-9868-607bf9be273e-kube-api-access-wc25d\") pod \"openstack-galera-0\" (UID: \"29bd8985-5f22-46e9-9868-607bf9be273e\") " pod="openstack/openstack-galera-0" Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.519941 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/29bd8985-5f22-46e9-9868-607bf9be273e-config-data-default\") pod \"openstack-galera-0\" (UID: \"29bd8985-5f22-46e9-9868-607bf9be273e\") " pod="openstack/openstack-galera-0" Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.519967 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: 
\"29bd8985-5f22-46e9-9868-607bf9be273e\") " pod="openstack/openstack-galera-0" Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.520037 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29bd8985-5f22-46e9-9868-607bf9be273e-operator-scripts\") pod \"openstack-galera-0\" (UID: \"29bd8985-5f22-46e9-9868-607bf9be273e\") " pod="openstack/openstack-galera-0" Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.520058 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/29bd8985-5f22-46e9-9868-607bf9be273e-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"29bd8985-5f22-46e9-9868-607bf9be273e\") " pod="openstack/openstack-galera-0" Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.520114 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29bd8985-5f22-46e9-9868-607bf9be273e-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"29bd8985-5f22-46e9-9868-607bf9be273e\") " pod="openstack/openstack-galera-0" Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.521459 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/29bd8985-5f22-46e9-9868-607bf9be273e-kolla-config\") pod \"openstack-galera-0\" (UID: \"29bd8985-5f22-46e9-9868-607bf9be273e\") " pod="openstack/openstack-galera-0" Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.521748 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/29bd8985-5f22-46e9-9868-607bf9be273e-config-data-generated\") pod \"openstack-galera-0\" (UID: \"29bd8985-5f22-46e9-9868-607bf9be273e\") " pod="openstack/openstack-galera-0" Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.523678 4760 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29bd8985-5f22-46e9-9868-607bf9be273e-operator-scripts\") pod \"openstack-galera-0\" (UID: \"29bd8985-5f22-46e9-9868-607bf9be273e\") " pod="openstack/openstack-galera-0" Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.523968 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"29bd8985-5f22-46e9-9868-607bf9be273e\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-galera-0" Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.536902 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/29bd8985-5f22-46e9-9868-607bf9be273e-config-data-default\") pod \"openstack-galera-0\" (UID: \"29bd8985-5f22-46e9-9868-607bf9be273e\") " pod="openstack/openstack-galera-0" Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.537376 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/29bd8985-5f22-46e9-9868-607bf9be273e-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"29bd8985-5f22-46e9-9868-607bf9be273e\") " pod="openstack/openstack-galera-0" Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.549301 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29bd8985-5f22-46e9-9868-607bf9be273e-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"29bd8985-5f22-46e9-9868-607bf9be273e\") " pod="openstack/openstack-galera-0" Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.561424 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"29bd8985-5f22-46e9-9868-607bf9be273e\") " pod="openstack/openstack-galera-0" Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.565938 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wc25d\" (UniqueName: \"kubernetes.io/projected/29bd8985-5f22-46e9-9868-607bf9be273e-kube-api-access-wc25d\") pod \"openstack-galera-0\" (UID: \"29bd8985-5f22-46e9-9868-607bf9be273e\") " pod="openstack/openstack-galera-0" Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.586941 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.769711 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"06b9d67d-1790-43ec-8009-91d0cd43e6da","Type":"ContainerStarted","Data":"b14f2e51d8d5e82e725321f229e21a18f7e617652a935f60dfcebde41c79dd68"} Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.774586 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7d829f67-5ff7-4334-bb2d-2767a311159c","Type":"ContainerStarted","Data":"4b975f80ef2072e1178f421772e768558fc33ff22a27edb1b1fe54f8108c0f70"} Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.793514 4760 generic.go:334] "Generic (PLEG): container finished" podID="b88b3abe-b642-4d65-b822-5b62d6095959" containerID="916d38b69e34b3c7bfd101d0e1dafae28fb9daba7502d1230c6e3a8a77f5e9b8" exitCode=0 Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.793580 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bxqfs" event={"ID":"b88b3abe-b642-4d65-b822-5b62d6095959","Type":"ContainerDied","Data":"916d38b69e34b3c7bfd101d0e1dafae28fb9daba7502d1230c6e3a8a77f5e9b8"} Jan 21 16:03:40 crc kubenswrapper[4760]: I0121 16:03:40.793618 4760 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bxqfs" event={"ID":"b88b3abe-b642-4d65-b822-5b62d6095959","Type":"ContainerStarted","Data":"f99d79edb43f406eefad8b2bf5c43d8cb6c67afb47dc2beeecadfa37fc9351b3"} Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.133102 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.535682 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-k2dj5" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.636906 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.639253 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.643846 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.644287 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-rw2bg" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.648320 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.650715 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.657904 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06184570-059b-4132-a5b6-365e3e12e383-combined-ca-bundle\") pod \"memcached-0\" (UID: \"06184570-059b-4132-a5b6-365e3e12e383\") " pod="openstack/memcached-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.657956 4760 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffzwr\" (UniqueName: \"kubernetes.io/projected/06184570-059b-4132-a5b6-365e3e12e383-kube-api-access-ffzwr\") pod \"memcached-0\" (UID: \"06184570-059b-4132-a5b6-365e3e12e383\") " pod="openstack/memcached-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.658033 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/06184570-059b-4132-a5b6-365e3e12e383-kolla-config\") pod \"memcached-0\" (UID: \"06184570-059b-4132-a5b6-365e3e12e383\") " pod="openstack/memcached-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.658231 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/06184570-059b-4132-a5b6-365e3e12e383-memcached-tls-certs\") pod \"memcached-0\" (UID: \"06184570-059b-4132-a5b6-365e3e12e383\") " pod="openstack/memcached-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.658290 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/06184570-059b-4132-a5b6-365e3e12e383-config-data\") pod \"memcached-0\" (UID: \"06184570-059b-4132-a5b6-365e3e12e383\") " pod="openstack/memcached-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.660840 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.668557 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.674292 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.674513 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-dpgqj" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.674930 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.675145 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.685851 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.764090 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06184570-059b-4132-a5b6-365e3e12e383-combined-ca-bundle\") pod \"memcached-0\" (UID: \"06184570-059b-4132-a5b6-365e3e12e383\") " pod="openstack/memcached-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.764154 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffzwr\" (UniqueName: \"kubernetes.io/projected/06184570-059b-4132-a5b6-365e3e12e383-kube-api-access-ffzwr\") pod \"memcached-0\" (UID: \"06184570-059b-4132-a5b6-365e3e12e383\") " pod="openstack/memcached-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.764225 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/06184570-059b-4132-a5b6-365e3e12e383-kolla-config\") pod \"memcached-0\" (UID: \"06184570-059b-4132-a5b6-365e3e12e383\") " 
pod="openstack/memcached-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.764246 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/06184570-059b-4132-a5b6-365e3e12e383-memcached-tls-certs\") pod \"memcached-0\" (UID: \"06184570-059b-4132-a5b6-365e3e12e383\") " pod="openstack/memcached-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.764267 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/06184570-059b-4132-a5b6-365e3e12e383-config-data\") pod \"memcached-0\" (UID: \"06184570-059b-4132-a5b6-365e3e12e383\") " pod="openstack/memcached-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.765117 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/06184570-059b-4132-a5b6-365e3e12e383-config-data\") pod \"memcached-0\" (UID: \"06184570-059b-4132-a5b6-365e3e12e383\") " pod="openstack/memcached-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.770007 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/06184570-059b-4132-a5b6-365e3e12e383-kolla-config\") pod \"memcached-0\" (UID: \"06184570-059b-4132-a5b6-365e3e12e383\") " pod="openstack/memcached-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.777071 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/06184570-059b-4132-a5b6-365e3e12e383-memcached-tls-certs\") pod \"memcached-0\" (UID: \"06184570-059b-4132-a5b6-365e3e12e383\") " pod="openstack/memcached-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.782864 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/06184570-059b-4132-a5b6-365e3e12e383-combined-ca-bundle\") pod \"memcached-0\" (UID: \"06184570-059b-4132-a5b6-365e3e12e383\") " pod="openstack/memcached-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.797614 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffzwr\" (UniqueName: \"kubernetes.io/projected/06184570-059b-4132-a5b6-365e3e12e383-kube-api-access-ffzwr\") pod \"memcached-0\" (UID: \"06184570-059b-4132-a5b6-365e3e12e383\") " pod="openstack/memcached-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.819556 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"29bd8985-5f22-46e9-9868-607bf9be273e","Type":"ContainerStarted","Data":"42abd2fb8f4aed1b64cdb14e3f4d342aceaec98a5e60126d935e5edc2d1d3a16"} Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.865582 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d0612ab6-de5e-4f61-9e1c-97f8237c996c-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"d0612ab6-de5e-4f61-9e1c-97f8237c996c\") " pod="openstack/openstack-cell1-galera-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.865688 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0612ab6-de5e-4f61-9e1c-97f8237c996c-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"d0612ab6-de5e-4f61-9e1c-97f8237c996c\") " pod="openstack/openstack-cell1-galera-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.865714 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d0612ab6-de5e-4f61-9e1c-97f8237c996c-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: 
\"d0612ab6-de5e-4f61-9e1c-97f8237c996c\") " pod="openstack/openstack-cell1-galera-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.865736 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0612ab6-de5e-4f61-9e1c-97f8237c996c-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"d0612ab6-de5e-4f61-9e1c-97f8237c996c\") " pod="openstack/openstack-cell1-galera-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.865757 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b94cb\" (UniqueName: \"kubernetes.io/projected/d0612ab6-de5e-4f61-9e1c-97f8237c996c-kube-api-access-b94cb\") pod \"openstack-cell1-galera-0\" (UID: \"d0612ab6-de5e-4f61-9e1c-97f8237c996c\") " pod="openstack/openstack-cell1-galera-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.865781 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d0612ab6-de5e-4f61-9e1c-97f8237c996c-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"d0612ab6-de5e-4f61-9e1c-97f8237c996c\") " pod="openstack/openstack-cell1-galera-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.865805 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"d0612ab6-de5e-4f61-9e1c-97f8237c996c\") " pod="openstack/openstack-cell1-galera-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.865847 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0612ab6-de5e-4f61-9e1c-97f8237c996c-galera-tls-certs\") pod \"openstack-cell1-galera-0\" 
(UID: \"d0612ab6-de5e-4f61-9e1c-97f8237c996c\") " pod="openstack/openstack-cell1-galera-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.967530 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d0612ab6-de5e-4f61-9e1c-97f8237c996c-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"d0612ab6-de5e-4f61-9e1c-97f8237c996c\") " pod="openstack/openstack-cell1-galera-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.967594 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"d0612ab6-de5e-4f61-9e1c-97f8237c996c\") " pod="openstack/openstack-cell1-galera-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.967666 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0612ab6-de5e-4f61-9e1c-97f8237c996c-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"d0612ab6-de5e-4f61-9e1c-97f8237c996c\") " pod="openstack/openstack-cell1-galera-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.967734 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d0612ab6-de5e-4f61-9e1c-97f8237c996c-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"d0612ab6-de5e-4f61-9e1c-97f8237c996c\") " pod="openstack/openstack-cell1-galera-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.967780 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0612ab6-de5e-4f61-9e1c-97f8237c996c-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"d0612ab6-de5e-4f61-9e1c-97f8237c996c\") " 
pod="openstack/openstack-cell1-galera-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.967810 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d0612ab6-de5e-4f61-9e1c-97f8237c996c-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"d0612ab6-de5e-4f61-9e1c-97f8237c996c\") " pod="openstack/openstack-cell1-galera-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.967836 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0612ab6-de5e-4f61-9e1c-97f8237c996c-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"d0612ab6-de5e-4f61-9e1c-97f8237c996c\") " pod="openstack/openstack-cell1-galera-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.967859 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b94cb\" (UniqueName: \"kubernetes.io/projected/d0612ab6-de5e-4f61-9e1c-97f8237c996c-kube-api-access-b94cb\") pod \"openstack-cell1-galera-0\" (UID: \"d0612ab6-de5e-4f61-9e1c-97f8237c996c\") " pod="openstack/openstack-cell1-galera-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.968552 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"d0612ab6-de5e-4f61-9e1c-97f8237c996c\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/openstack-cell1-galera-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.968584 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d0612ab6-de5e-4f61-9e1c-97f8237c996c-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"d0612ab6-de5e-4f61-9e1c-97f8237c996c\") " pod="openstack/openstack-cell1-galera-0" Jan 21 16:03:41 crc 
kubenswrapper[4760]: I0121 16:03:41.969347 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d0612ab6-de5e-4f61-9e1c-97f8237c996c-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"d0612ab6-de5e-4f61-9e1c-97f8237c996c\") " pod="openstack/openstack-cell1-galera-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.969672 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d0612ab6-de5e-4f61-9e1c-97f8237c996c-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"d0612ab6-de5e-4f61-9e1c-97f8237c996c\") " pod="openstack/openstack-cell1-galera-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.970152 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0612ab6-de5e-4f61-9e1c-97f8237c996c-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"d0612ab6-de5e-4f61-9e1c-97f8237c996c\") " pod="openstack/openstack-cell1-galera-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.972020 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0612ab6-de5e-4f61-9e1c-97f8237c996c-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"d0612ab6-de5e-4f61-9e1c-97f8237c996c\") " pod="openstack/openstack-cell1-galera-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.975392 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.981240 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0612ab6-de5e-4f61-9e1c-97f8237c996c-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"d0612ab6-de5e-4f61-9e1c-97f8237c996c\") " pod="openstack/openstack-cell1-galera-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.988968 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b94cb\" (UniqueName: \"kubernetes.io/projected/d0612ab6-de5e-4f61-9e1c-97f8237c996c-kube-api-access-b94cb\") pod \"openstack-cell1-galera-0\" (UID: \"d0612ab6-de5e-4f61-9e1c-97f8237c996c\") " pod="openstack/openstack-cell1-galera-0" Jan 21 16:03:41 crc kubenswrapper[4760]: I0121 16:03:41.997283 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"d0612ab6-de5e-4f61-9e1c-97f8237c996c\") " pod="openstack/openstack-cell1-galera-0" Jan 21 16:03:42 crc kubenswrapper[4760]: I0121 16:03:42.014780 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 21 16:03:43 crc kubenswrapper[4760]: I0121 16:03:43.200172 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 21 16:03:43 crc kubenswrapper[4760]: W0121 16:03:43.255031 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06184570_059b_4132_a5b6_365e3e12e383.slice/crio-a36063529cf186687508508351932c5ad08eabf14a489a31fe0af44302f251d1 WatchSource:0}: Error finding container a36063529cf186687508508351932c5ad08eabf14a489a31fe0af44302f251d1: Status 404 returned error can't find the container with id a36063529cf186687508508351932c5ad08eabf14a489a31fe0af44302f251d1 Jan 21 16:03:43 crc kubenswrapper[4760]: I0121 16:03:43.350949 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 21 16:03:43 crc kubenswrapper[4760]: I0121 16:03:43.586729 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 16:03:43 crc kubenswrapper[4760]: I0121 16:03:43.587857 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 16:03:43 crc kubenswrapper[4760]: I0121 16:03:43.606774 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-cjwp5" Jan 21 16:03:43 crc kubenswrapper[4760]: I0121 16:03:43.608498 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 16:03:43 crc kubenswrapper[4760]: I0121 16:03:43.624131 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmdrz\" (UniqueName: \"kubernetes.io/projected/fd82db1d-e956-477b-99af-024e7e0a6170-kube-api-access-gmdrz\") pod \"kube-state-metrics-0\" (UID: \"fd82db1d-e956-477b-99af-024e7e0a6170\") " pod="openstack/kube-state-metrics-0" Jan 21 16:03:43 crc kubenswrapper[4760]: I0121 16:03:43.725726 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmdrz\" (UniqueName: \"kubernetes.io/projected/fd82db1d-e956-477b-99af-024e7e0a6170-kube-api-access-gmdrz\") pod \"kube-state-metrics-0\" (UID: \"fd82db1d-e956-477b-99af-024e7e0a6170\") " pod="openstack/kube-state-metrics-0" Jan 21 16:03:43 crc kubenswrapper[4760]: I0121 16:03:43.748862 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmdrz\" (UniqueName: \"kubernetes.io/projected/fd82db1d-e956-477b-99af-024e7e0a6170-kube-api-access-gmdrz\") pod \"kube-state-metrics-0\" (UID: \"fd82db1d-e956-477b-99af-024e7e0a6170\") " pod="openstack/kube-state-metrics-0" Jan 21 16:03:43 crc kubenswrapper[4760]: I0121 16:03:43.896168 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"06184570-059b-4132-a5b6-365e3e12e383","Type":"ContainerStarted","Data":"a36063529cf186687508508351932c5ad08eabf14a489a31fe0af44302f251d1"} Jan 21 16:03:43 crc kubenswrapper[4760]: I0121 16:03:43.903494 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/openstack-cell1-galera-0" event={"ID":"d0612ab6-de5e-4f61-9e1c-97f8237c996c","Type":"ContainerStarted","Data":"b2212c23c1f27a655d1a71ae15555312f2f30ed36d90282811db4255c05fa53d"} Jan 21 16:03:43 crc kubenswrapper[4760]: I0121 16:03:43.924804 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 16:03:44 crc kubenswrapper[4760]: I0121 16:03:44.669939 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 16:03:44 crc kubenswrapper[4760]: W0121 16:03:44.693157 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd82db1d_e956_477b_99af_024e7e0a6170.slice/crio-c21e61b8d5ddc6d0cf0e89f35035f43ac3b2d33ed78630f8fcb288cfbfd9d358 WatchSource:0}: Error finding container c21e61b8d5ddc6d0cf0e89f35035f43ac3b2d33ed78630f8fcb288cfbfd9d358: Status 404 returned error can't find the container with id c21e61b8d5ddc6d0cf0e89f35035f43ac3b2d33ed78630f8fcb288cfbfd9d358 Jan 21 16:03:44 crc kubenswrapper[4760]: I0121 16:03:44.755446 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k2dj5"] Jan 21 16:03:44 crc kubenswrapper[4760]: I0121 16:03:44.755710 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-k2dj5" podUID="3d267ab8-12dc-43a2-8199-7885783e8601" containerName="registry-server" containerID="cri-o://8ed518aaf70cb914b959c0683dc371b0b683a4edffe8894022328b3ad6861f97" gracePeriod=2 Jan 21 16:03:44 crc kubenswrapper[4760]: I0121 16:03:44.912491 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"fd82db1d-e956-477b-99af-024e7e0a6170","Type":"ContainerStarted","Data":"c21e61b8d5ddc6d0cf0e89f35035f43ac3b2d33ed78630f8fcb288cfbfd9d358"} Jan 21 16:03:44 crc kubenswrapper[4760]: I0121 16:03:44.916869 4760 
generic.go:334] "Generic (PLEG): container finished" podID="3d267ab8-12dc-43a2-8199-7885783e8601" containerID="8ed518aaf70cb914b959c0683dc371b0b683a4edffe8894022328b3ad6861f97" exitCode=0 Jan 21 16:03:44 crc kubenswrapper[4760]: I0121 16:03:44.916910 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k2dj5" event={"ID":"3d267ab8-12dc-43a2-8199-7885783e8601","Type":"ContainerDied","Data":"8ed518aaf70cb914b959c0683dc371b0b683a4edffe8894022328b3ad6861f97"} Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.373697 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ltr79"] Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.376060 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ltr79" Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.393448 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.393502 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-hxzg6" Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.393697 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.493544 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c17cd40e-6e7b-4c1e-9ca8-e6edc1248330-var-run\") pod \"ovn-controller-ltr79\" (UID: \"c17cd40e-6e7b-4c1e-9ca8-e6edc1248330\") " pod="openstack/ovn-controller-ltr79" Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.493628 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/c17cd40e-6e7b-4c1e-9ca8-e6edc1248330-var-run-ovn\") pod \"ovn-controller-ltr79\" (UID: \"c17cd40e-6e7b-4c1e-9ca8-e6edc1248330\") " pod="openstack/ovn-controller-ltr79" Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.493810 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c17cd40e-6e7b-4c1e-9ca8-e6edc1248330-scripts\") pod \"ovn-controller-ltr79\" (UID: \"c17cd40e-6e7b-4c1e-9ca8-e6edc1248330\") " pod="openstack/ovn-controller-ltr79" Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.494013 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c17cd40e-6e7b-4c1e-9ca8-e6edc1248330-combined-ca-bundle\") pod \"ovn-controller-ltr79\" (UID: \"c17cd40e-6e7b-4c1e-9ca8-e6edc1248330\") " pod="openstack/ovn-controller-ltr79" Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.494122 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c17cd40e-6e7b-4c1e-9ca8-e6edc1248330-var-log-ovn\") pod \"ovn-controller-ltr79\" (UID: \"c17cd40e-6e7b-4c1e-9ca8-e6edc1248330\") " pod="openstack/ovn-controller-ltr79" Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.494231 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c17cd40e-6e7b-4c1e-9ca8-e6edc1248330-ovn-controller-tls-certs\") pod \"ovn-controller-ltr79\" (UID: \"c17cd40e-6e7b-4c1e-9ca8-e6edc1248330\") " pod="openstack/ovn-controller-ltr79" Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.494287 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhqsw\" (UniqueName: 
\"kubernetes.io/projected/c17cd40e-6e7b-4c1e-9ca8-e6edc1248330-kube-api-access-vhqsw\") pod \"ovn-controller-ltr79\" (UID: \"c17cd40e-6e7b-4c1e-9ca8-e6edc1248330\") " pod="openstack/ovn-controller-ltr79" Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.586010 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ltr79"] Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.596556 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c17cd40e-6e7b-4c1e-9ca8-e6edc1248330-ovn-controller-tls-certs\") pod \"ovn-controller-ltr79\" (UID: \"c17cd40e-6e7b-4c1e-9ca8-e6edc1248330\") " pod="openstack/ovn-controller-ltr79" Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.596918 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhqsw\" (UniqueName: \"kubernetes.io/projected/c17cd40e-6e7b-4c1e-9ca8-e6edc1248330-kube-api-access-vhqsw\") pod \"ovn-controller-ltr79\" (UID: \"c17cd40e-6e7b-4c1e-9ca8-e6edc1248330\") " pod="openstack/ovn-controller-ltr79" Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.597041 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c17cd40e-6e7b-4c1e-9ca8-e6edc1248330-var-run\") pod \"ovn-controller-ltr79\" (UID: \"c17cd40e-6e7b-4c1e-9ca8-e6edc1248330\") " pod="openstack/ovn-controller-ltr79" Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.597155 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c17cd40e-6e7b-4c1e-9ca8-e6edc1248330-var-run-ovn\") pod \"ovn-controller-ltr79\" (UID: \"c17cd40e-6e7b-4c1e-9ca8-e6edc1248330\") " pod="openstack/ovn-controller-ltr79" Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.597337 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c17cd40e-6e7b-4c1e-9ca8-e6edc1248330-scripts\") pod \"ovn-controller-ltr79\" (UID: \"c17cd40e-6e7b-4c1e-9ca8-e6edc1248330\") " pod="openstack/ovn-controller-ltr79" Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.597491 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c17cd40e-6e7b-4c1e-9ca8-e6edc1248330-combined-ca-bundle\") pod \"ovn-controller-ltr79\" (UID: \"c17cd40e-6e7b-4c1e-9ca8-e6edc1248330\") " pod="openstack/ovn-controller-ltr79" Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.597603 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c17cd40e-6e7b-4c1e-9ca8-e6edc1248330-var-log-ovn\") pod \"ovn-controller-ltr79\" (UID: \"c17cd40e-6e7b-4c1e-9ca8-e6edc1248330\") " pod="openstack/ovn-controller-ltr79" Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.598462 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c17cd40e-6e7b-4c1e-9ca8-e6edc1248330-var-run\") pod \"ovn-controller-ltr79\" (UID: \"c17cd40e-6e7b-4c1e-9ca8-e6edc1248330\") " pod="openstack/ovn-controller-ltr79" Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.598564 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c17cd40e-6e7b-4c1e-9ca8-e6edc1248330-var-run-ovn\") pod \"ovn-controller-ltr79\" (UID: \"c17cd40e-6e7b-4c1e-9ca8-e6edc1248330\") " pod="openstack/ovn-controller-ltr79" Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.598617 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c17cd40e-6e7b-4c1e-9ca8-e6edc1248330-var-log-ovn\") pod \"ovn-controller-ltr79\" (UID: \"c17cd40e-6e7b-4c1e-9ca8-e6edc1248330\") " 
pod="openstack/ovn-controller-ltr79" Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.616261 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c17cd40e-6e7b-4c1e-9ca8-e6edc1248330-combined-ca-bundle\") pod \"ovn-controller-ltr79\" (UID: \"c17cd40e-6e7b-4c1e-9ca8-e6edc1248330\") " pod="openstack/ovn-controller-ltr79" Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.619703 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c17cd40e-6e7b-4c1e-9ca8-e6edc1248330-ovn-controller-tls-certs\") pod \"ovn-controller-ltr79\" (UID: \"c17cd40e-6e7b-4c1e-9ca8-e6edc1248330\") " pod="openstack/ovn-controller-ltr79" Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.669178 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhqsw\" (UniqueName: \"kubernetes.io/projected/c17cd40e-6e7b-4c1e-9ca8-e6edc1248330-kube-api-access-vhqsw\") pod \"ovn-controller-ltr79\" (UID: \"c17cd40e-6e7b-4c1e-9ca8-e6edc1248330\") " pod="openstack/ovn-controller-ltr79" Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.712727 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-jfrjn"] Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.714492 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-jfrjn" Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.723445 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-jfrjn"] Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.902318 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/1a0315f5-89b8-4589-b088-2ea2bb15e078-etc-ovs\") pod \"ovn-controller-ovs-jfrjn\" (UID: \"1a0315f5-89b8-4589-b088-2ea2bb15e078\") " pod="openstack/ovn-controller-ovs-jfrjn" Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.902391 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/1a0315f5-89b8-4589-b088-2ea2bb15e078-var-lib\") pod \"ovn-controller-ovs-jfrjn\" (UID: \"1a0315f5-89b8-4589-b088-2ea2bb15e078\") " pod="openstack/ovn-controller-ovs-jfrjn" Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.902452 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1a0315f5-89b8-4589-b088-2ea2bb15e078-scripts\") pod \"ovn-controller-ovs-jfrjn\" (UID: \"1a0315f5-89b8-4589-b088-2ea2bb15e078\") " pod="openstack/ovn-controller-ovs-jfrjn" Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.902520 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/1a0315f5-89b8-4589-b088-2ea2bb15e078-var-log\") pod \"ovn-controller-ovs-jfrjn\" (UID: \"1a0315f5-89b8-4589-b088-2ea2bb15e078\") " pod="openstack/ovn-controller-ovs-jfrjn" Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.902585 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kgpj\" (UniqueName: 
\"kubernetes.io/projected/1a0315f5-89b8-4589-b088-2ea2bb15e078-kube-api-access-7kgpj\") pod \"ovn-controller-ovs-jfrjn\" (UID: \"1a0315f5-89b8-4589-b088-2ea2bb15e078\") " pod="openstack/ovn-controller-ovs-jfrjn" Jan 21 16:03:49 crc kubenswrapper[4760]: I0121 16:03:49.902657 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1a0315f5-89b8-4589-b088-2ea2bb15e078-var-run\") pod \"ovn-controller-ovs-jfrjn\" (UID: \"1a0315f5-89b8-4589-b088-2ea2bb15e078\") " pod="openstack/ovn-controller-ovs-jfrjn" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.004306 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1a0315f5-89b8-4589-b088-2ea2bb15e078-scripts\") pod \"ovn-controller-ovs-jfrjn\" (UID: \"1a0315f5-89b8-4589-b088-2ea2bb15e078\") " pod="openstack/ovn-controller-ovs-jfrjn" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.004463 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/1a0315f5-89b8-4589-b088-2ea2bb15e078-var-log\") pod \"ovn-controller-ovs-jfrjn\" (UID: \"1a0315f5-89b8-4589-b088-2ea2bb15e078\") " pod="openstack/ovn-controller-ovs-jfrjn" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.004514 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kgpj\" (UniqueName: \"kubernetes.io/projected/1a0315f5-89b8-4589-b088-2ea2bb15e078-kube-api-access-7kgpj\") pod \"ovn-controller-ovs-jfrjn\" (UID: \"1a0315f5-89b8-4589-b088-2ea2bb15e078\") " pod="openstack/ovn-controller-ovs-jfrjn" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.004583 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1a0315f5-89b8-4589-b088-2ea2bb15e078-var-run\") pod 
\"ovn-controller-ovs-jfrjn\" (UID: \"1a0315f5-89b8-4589-b088-2ea2bb15e078\") " pod="openstack/ovn-controller-ovs-jfrjn" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.004673 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/1a0315f5-89b8-4589-b088-2ea2bb15e078-etc-ovs\") pod \"ovn-controller-ovs-jfrjn\" (UID: \"1a0315f5-89b8-4589-b088-2ea2bb15e078\") " pod="openstack/ovn-controller-ovs-jfrjn" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.004704 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/1a0315f5-89b8-4589-b088-2ea2bb15e078-var-lib\") pod \"ovn-controller-ovs-jfrjn\" (UID: \"1a0315f5-89b8-4589-b088-2ea2bb15e078\") " pod="openstack/ovn-controller-ovs-jfrjn" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.005041 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/1a0315f5-89b8-4589-b088-2ea2bb15e078-var-lib\") pod \"ovn-controller-ovs-jfrjn\" (UID: \"1a0315f5-89b8-4589-b088-2ea2bb15e078\") " pod="openstack/ovn-controller-ovs-jfrjn" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.005127 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1a0315f5-89b8-4589-b088-2ea2bb15e078-var-run\") pod \"ovn-controller-ovs-jfrjn\" (UID: \"1a0315f5-89b8-4589-b088-2ea2bb15e078\") " pod="openstack/ovn-controller-ovs-jfrjn" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.005881 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/1a0315f5-89b8-4589-b088-2ea2bb15e078-var-log\") pod \"ovn-controller-ovs-jfrjn\" (UID: \"1a0315f5-89b8-4589-b088-2ea2bb15e078\") " pod="openstack/ovn-controller-ovs-jfrjn" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.006078 4760 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/1a0315f5-89b8-4589-b088-2ea2bb15e078-etc-ovs\") pod \"ovn-controller-ovs-jfrjn\" (UID: \"1a0315f5-89b8-4589-b088-2ea2bb15e078\") " pod="openstack/ovn-controller-ovs-jfrjn" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.009046 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1a0315f5-89b8-4589-b088-2ea2bb15e078-scripts\") pod \"ovn-controller-ovs-jfrjn\" (UID: \"1a0315f5-89b8-4589-b088-2ea2bb15e078\") " pod="openstack/ovn-controller-ovs-jfrjn" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.025832 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kgpj\" (UniqueName: \"kubernetes.io/projected/1a0315f5-89b8-4589-b088-2ea2bb15e078-kube-api-access-7kgpj\") pod \"ovn-controller-ovs-jfrjn\" (UID: \"1a0315f5-89b8-4589-b088-2ea2bb15e078\") " pod="openstack/ovn-controller-ovs-jfrjn" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.049540 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-jfrjn" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.056215 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.058867 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.066011 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.066251 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.066398 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-52jz4" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.068031 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.068407 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.068764 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.263798 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/47448c69-3198-48d8-8623-9a339a934aca-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"47448c69-3198-48d8-8623-9a339a934aca\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.263852 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4q7zb\" (UniqueName: \"kubernetes.io/projected/47448c69-3198-48d8-8623-9a339a934aca-kube-api-access-4q7zb\") pod \"ovsdbserver-nb-0\" (UID: \"47448c69-3198-48d8-8623-9a339a934aca\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.263889 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47448c69-3198-48d8-8623-9a339a934aca-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"47448c69-3198-48d8-8623-9a339a934aca\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.263936 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/47448c69-3198-48d8-8623-9a339a934aca-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"47448c69-3198-48d8-8623-9a339a934aca\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.264102 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/47448c69-3198-48d8-8623-9a339a934aca-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"47448c69-3198-48d8-8623-9a339a934aca\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.264204 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47448c69-3198-48d8-8623-9a339a934aca-config\") pod \"ovsdbserver-nb-0\" (UID: \"47448c69-3198-48d8-8623-9a339a934aca\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.264253 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"47448c69-3198-48d8-8623-9a339a934aca\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.264280 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/47448c69-3198-48d8-8623-9a339a934aca-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"47448c69-3198-48d8-8623-9a339a934aca\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.365620 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47448c69-3198-48d8-8623-9a339a934aca-config\") pod \"ovsdbserver-nb-0\" (UID: \"47448c69-3198-48d8-8623-9a339a934aca\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.365703 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"47448c69-3198-48d8-8623-9a339a934aca\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.365737 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/47448c69-3198-48d8-8623-9a339a934aca-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"47448c69-3198-48d8-8623-9a339a934aca\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.365808 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/47448c69-3198-48d8-8623-9a339a934aca-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"47448c69-3198-48d8-8623-9a339a934aca\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.365840 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4q7zb\" (UniqueName: \"kubernetes.io/projected/47448c69-3198-48d8-8623-9a339a934aca-kube-api-access-4q7zb\") pod \"ovsdbserver-nb-0\" (UID: \"47448c69-3198-48d8-8623-9a339a934aca\") " pod="openstack/ovsdbserver-nb-0" Jan 
21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.365881 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47448c69-3198-48d8-8623-9a339a934aca-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"47448c69-3198-48d8-8623-9a339a934aca\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.365928 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/47448c69-3198-48d8-8623-9a339a934aca-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"47448c69-3198-48d8-8623-9a339a934aca\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.365962 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/47448c69-3198-48d8-8623-9a339a934aca-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"47448c69-3198-48d8-8623-9a339a934aca\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.366485 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/47448c69-3198-48d8-8623-9a339a934aca-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"47448c69-3198-48d8-8623-9a339a934aca\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.367011 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47448c69-3198-48d8-8623-9a339a934aca-config\") pod \"ovsdbserver-nb-0\" (UID: \"47448c69-3198-48d8-8623-9a339a934aca\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.368214 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/47448c69-3198-48d8-8623-9a339a934aca-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"47448c69-3198-48d8-8623-9a339a934aca\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.368413 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"47448c69-3198-48d8-8623-9a339a934aca\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.373152 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47448c69-3198-48d8-8623-9a339a934aca-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"47448c69-3198-48d8-8623-9a339a934aca\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.379694 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/47448c69-3198-48d8-8623-9a339a934aca-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"47448c69-3198-48d8-8623-9a339a934aca\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.394411 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4q7zb\" (UniqueName: \"kubernetes.io/projected/47448c69-3198-48d8-8623-9a339a934aca-kube-api-access-4q7zb\") pod \"ovsdbserver-nb-0\" (UID: \"47448c69-3198-48d8-8623-9a339a934aca\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.396388 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/47448c69-3198-48d8-8623-9a339a934aca-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: 
\"47448c69-3198-48d8-8623-9a339a934aca\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.432291 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"47448c69-3198-48d8-8623-9a339a934aca\") " pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.681638 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.916847 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c17cd40e-6e7b-4c1e-9ca8-e6edc1248330-scripts\") pod \"ovn-controller-ltr79\" (UID: \"c17cd40e-6e7b-4c1e-9ca8-e6edc1248330\") " pod="openstack/ovn-controller-ltr79" Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.946720 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:03:50 crc kubenswrapper[4760]: I0121 16:03:50.946823 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:03:51 crc kubenswrapper[4760]: I0121 16:03:51.210464 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ltr79" Jan 21 16:03:51 crc kubenswrapper[4760]: E0121 16:03:51.435279 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8ed518aaf70cb914b959c0683dc371b0b683a4edffe8894022328b3ad6861f97 is running failed: container process not found" containerID="8ed518aaf70cb914b959c0683dc371b0b683a4edffe8894022328b3ad6861f97" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 16:03:51 crc kubenswrapper[4760]: E0121 16:03:51.437968 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8ed518aaf70cb914b959c0683dc371b0b683a4edffe8894022328b3ad6861f97 is running failed: container process not found" containerID="8ed518aaf70cb914b959c0683dc371b0b683a4edffe8894022328b3ad6861f97" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 16:03:51 crc kubenswrapper[4760]: E0121 16:03:51.438930 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8ed518aaf70cb914b959c0683dc371b0b683a4edffe8894022328b3ad6861f97 is running failed: container process not found" containerID="8ed518aaf70cb914b959c0683dc371b0b683a4edffe8894022328b3ad6861f97" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 16:03:51 crc kubenswrapper[4760]: E0121 16:03:51.438975 4760 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8ed518aaf70cb914b959c0683dc371b0b683a4edffe8894022328b3ad6861f97 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-k2dj5" podUID="3d267ab8-12dc-43a2-8199-7885783e8601" containerName="registry-server" Jan 21 16:03:51 crc kubenswrapper[4760]: I0121 16:03:51.440533 4760 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/ovsdbserver-sb-0"] Jan 21 16:03:51 crc kubenswrapper[4760]: I0121 16:03:51.442050 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:51 crc kubenswrapper[4760]: I0121 16:03:51.445455 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 21 16:03:51 crc kubenswrapper[4760]: I0121 16:03:51.445745 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 21 16:03:51 crc kubenswrapper[4760]: I0121 16:03:51.445812 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 21 16:03:51 crc kubenswrapper[4760]: I0121 16:03:51.445945 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-sbhlk" Jan 21 16:03:51 crc kubenswrapper[4760]: I0121 16:03:51.461193 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 21 16:03:51 crc kubenswrapper[4760]: I0121 16:03:51.593520 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ab8d081-832d-4e4c-92e6-94a97545613c-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"9ab8d081-832d-4e4c-92e6-94a97545613c\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:51 crc kubenswrapper[4760]: I0121 16:03:51.593578 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9ab8d081-832d-4e4c-92e6-94a97545613c-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"9ab8d081-832d-4e4c-92e6-94a97545613c\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:51 crc kubenswrapper[4760]: I0121 16:03:51.593610 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-kwh7p\" (UniqueName: \"kubernetes.io/projected/9ab8d081-832d-4e4c-92e6-94a97545613c-kube-api-access-kwh7p\") pod \"ovsdbserver-sb-0\" (UID: \"9ab8d081-832d-4e4c-92e6-94a97545613c\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:51 crc kubenswrapper[4760]: I0121 16:03:51.593634 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"9ab8d081-832d-4e4c-92e6-94a97545613c\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:51 crc kubenswrapper[4760]: I0121 16:03:51.593730 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ab8d081-832d-4e4c-92e6-94a97545613c-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"9ab8d081-832d-4e4c-92e6-94a97545613c\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:51 crc kubenswrapper[4760]: I0121 16:03:51.593761 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ab8d081-832d-4e4c-92e6-94a97545613c-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"9ab8d081-832d-4e4c-92e6-94a97545613c\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:51 crc kubenswrapper[4760]: I0121 16:03:51.593781 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ab8d081-832d-4e4c-92e6-94a97545613c-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"9ab8d081-832d-4e4c-92e6-94a97545613c\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:51 crc kubenswrapper[4760]: I0121 16:03:51.593832 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9ab8d081-832d-4e4c-92e6-94a97545613c-config\") pod \"ovsdbserver-sb-0\" (UID: \"9ab8d081-832d-4e4c-92e6-94a97545613c\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:51 crc kubenswrapper[4760]: I0121 16:03:51.695950 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ab8d081-832d-4e4c-92e6-94a97545613c-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"9ab8d081-832d-4e4c-92e6-94a97545613c\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:51 crc kubenswrapper[4760]: I0121 16:03:51.696008 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ab8d081-832d-4e4c-92e6-94a97545613c-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"9ab8d081-832d-4e4c-92e6-94a97545613c\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:51 crc kubenswrapper[4760]: I0121 16:03:51.696055 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ab8d081-832d-4e4c-92e6-94a97545613c-config\") pod \"ovsdbserver-sb-0\" (UID: \"9ab8d081-832d-4e4c-92e6-94a97545613c\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:51 crc kubenswrapper[4760]: I0121 16:03:51.696128 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ab8d081-832d-4e4c-92e6-94a97545613c-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"9ab8d081-832d-4e4c-92e6-94a97545613c\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:51 crc kubenswrapper[4760]: I0121 16:03:51.696171 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9ab8d081-832d-4e4c-92e6-94a97545613c-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"9ab8d081-832d-4e4c-92e6-94a97545613c\") " 
pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:51 crc kubenswrapper[4760]: I0121 16:03:51.696211 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwh7p\" (UniqueName: \"kubernetes.io/projected/9ab8d081-832d-4e4c-92e6-94a97545613c-kube-api-access-kwh7p\") pod \"ovsdbserver-sb-0\" (UID: \"9ab8d081-832d-4e4c-92e6-94a97545613c\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:51 crc kubenswrapper[4760]: I0121 16:03:51.696236 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"9ab8d081-832d-4e4c-92e6-94a97545613c\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:51 crc kubenswrapper[4760]: I0121 16:03:51.696708 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ab8d081-832d-4e4c-92e6-94a97545613c-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"9ab8d081-832d-4e4c-92e6-94a97545613c\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:51 crc kubenswrapper[4760]: I0121 16:03:51.697256 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"9ab8d081-832d-4e4c-92e6-94a97545613c\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:51 crc kubenswrapper[4760]: I0121 16:03:51.698177 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ab8d081-832d-4e4c-92e6-94a97545613c-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"9ab8d081-832d-4e4c-92e6-94a97545613c\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:51 crc kubenswrapper[4760]: I0121 16:03:51.699018 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9ab8d081-832d-4e4c-92e6-94a97545613c-config\") pod \"ovsdbserver-sb-0\" (UID: \"9ab8d081-832d-4e4c-92e6-94a97545613c\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:51 crc kubenswrapper[4760]: I0121 16:03:51.700611 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9ab8d081-832d-4e4c-92e6-94a97545613c-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"9ab8d081-832d-4e4c-92e6-94a97545613c\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:51 crc kubenswrapper[4760]: I0121 16:03:51.702094 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ab8d081-832d-4e4c-92e6-94a97545613c-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"9ab8d081-832d-4e4c-92e6-94a97545613c\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:51 crc kubenswrapper[4760]: I0121 16:03:51.742132 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ab8d081-832d-4e4c-92e6-94a97545613c-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"9ab8d081-832d-4e4c-92e6-94a97545613c\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:51 crc kubenswrapper[4760]: I0121 16:03:51.742951 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwh7p\" (UniqueName: \"kubernetes.io/projected/9ab8d081-832d-4e4c-92e6-94a97545613c-kube-api-access-kwh7p\") pod \"ovsdbserver-sb-0\" (UID: \"9ab8d081-832d-4e4c-92e6-94a97545613c\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:51 crc kubenswrapper[4760]: I0121 16:03:51.743773 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ab8d081-832d-4e4c-92e6-94a97545613c-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"9ab8d081-832d-4e4c-92e6-94a97545613c\") " 
pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:51 crc kubenswrapper[4760]: I0121 16:03:51.746159 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"9ab8d081-832d-4e4c-92e6-94a97545613c\") " pod="openstack/ovsdbserver-sb-0" Jan 21 16:03:51 crc kubenswrapper[4760]: I0121 16:03:51.790470 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 21 16:04:01 crc kubenswrapper[4760]: E0121 16:04:01.432639 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8ed518aaf70cb914b959c0683dc371b0b683a4edffe8894022328b3ad6861f97 is running failed: container process not found" containerID="8ed518aaf70cb914b959c0683dc371b0b683a4edffe8894022328b3ad6861f97" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 16:04:01 crc kubenswrapper[4760]: E0121 16:04:01.433627 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8ed518aaf70cb914b959c0683dc371b0b683a4edffe8894022328b3ad6861f97 is running failed: container process not found" containerID="8ed518aaf70cb914b959c0683dc371b0b683a4edffe8894022328b3ad6861f97" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 16:04:01 crc kubenswrapper[4760]: E0121 16:04:01.434028 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8ed518aaf70cb914b959c0683dc371b0b683a4edffe8894022328b3ad6861f97 is running failed: container process not found" containerID="8ed518aaf70cb914b959c0683dc371b0b683a4edffe8894022328b3ad6861f97" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 16:04:01 crc kubenswrapper[4760]: E0121 16:04:01.434066 4760 prober.go:104] "Probe errored" err="rpc 
error: code = NotFound desc = container is not created or running: checking if PID of 8ed518aaf70cb914b959c0683dc371b0b683a4edffe8894022328b3ad6861f97 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-k2dj5" podUID="3d267ab8-12dc-43a2-8199-7885783e8601" containerName="registry-server" Jan 21 16:04:01 crc kubenswrapper[4760]: I0121 16:04:01.886037 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k2dj5" Jan 21 16:04:01 crc kubenswrapper[4760]: I0121 16:04:01.921957 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cltcg\" (UniqueName: \"kubernetes.io/projected/3d267ab8-12dc-43a2-8199-7885783e8601-kube-api-access-cltcg\") pod \"3d267ab8-12dc-43a2-8199-7885783e8601\" (UID: \"3d267ab8-12dc-43a2-8199-7885783e8601\") " Jan 21 16:04:01 crc kubenswrapper[4760]: I0121 16:04:01.922075 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d267ab8-12dc-43a2-8199-7885783e8601-catalog-content\") pod \"3d267ab8-12dc-43a2-8199-7885783e8601\" (UID: \"3d267ab8-12dc-43a2-8199-7885783e8601\") " Jan 21 16:04:01 crc kubenswrapper[4760]: I0121 16:04:01.922132 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d267ab8-12dc-43a2-8199-7885783e8601-utilities\") pod \"3d267ab8-12dc-43a2-8199-7885783e8601\" (UID: \"3d267ab8-12dc-43a2-8199-7885783e8601\") " Jan 21 16:04:01 crc kubenswrapper[4760]: I0121 16:04:01.923275 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d267ab8-12dc-43a2-8199-7885783e8601-utilities" (OuterVolumeSpecName: "utilities") pod "3d267ab8-12dc-43a2-8199-7885783e8601" (UID: "3d267ab8-12dc-43a2-8199-7885783e8601"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:04:01 crc kubenswrapper[4760]: I0121 16:04:01.936503 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d267ab8-12dc-43a2-8199-7885783e8601-kube-api-access-cltcg" (OuterVolumeSpecName: "kube-api-access-cltcg") pod "3d267ab8-12dc-43a2-8199-7885783e8601" (UID: "3d267ab8-12dc-43a2-8199-7885783e8601"). InnerVolumeSpecName "kube-api-access-cltcg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:04:01 crc kubenswrapper[4760]: I0121 16:04:01.976599 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d267ab8-12dc-43a2-8199-7885783e8601-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3d267ab8-12dc-43a2-8199-7885783e8601" (UID: "3d267ab8-12dc-43a2-8199-7885783e8601"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:04:02 crc kubenswrapper[4760]: I0121 16:04:02.023726 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d267ab8-12dc-43a2-8199-7885783e8601-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:02 crc kubenswrapper[4760]: I0121 16:04:02.023762 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cltcg\" (UniqueName: \"kubernetes.io/projected/3d267ab8-12dc-43a2-8199-7885783e8601-kube-api-access-cltcg\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:02 crc kubenswrapper[4760]: I0121 16:04:02.023772 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d267ab8-12dc-43a2-8199-7885783e8601-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:02 crc kubenswrapper[4760]: I0121 16:04:02.262338 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k2dj5" 
event={"ID":"3d267ab8-12dc-43a2-8199-7885783e8601","Type":"ContainerDied","Data":"29fd42d9a78f60e891b57dd6ff1b653af0f8d67457faa248ab9a8ddd3ad04f64"} Jan 21 16:04:02 crc kubenswrapper[4760]: I0121 16:04:02.262383 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k2dj5" Jan 21 16:04:02 crc kubenswrapper[4760]: I0121 16:04:02.262396 4760 scope.go:117] "RemoveContainer" containerID="8ed518aaf70cb914b959c0683dc371b0b683a4edffe8894022328b3ad6861f97" Jan 21 16:04:02 crc kubenswrapper[4760]: I0121 16:04:02.296679 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k2dj5"] Jan 21 16:04:02 crc kubenswrapper[4760]: I0121 16:04:02.304036 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-k2dj5"] Jan 21 16:04:03 crc kubenswrapper[4760]: I0121 16:04:03.633662 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d267ab8-12dc-43a2-8199-7885783e8601" path="/var/lib/kubelet/pods/3d267ab8-12dc-43a2-8199-7885783e8601/volumes" Jan 21 16:04:11 crc kubenswrapper[4760]: E0121 16:04:11.295806 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 21 16:04:11 crc kubenswrapper[4760]: E0121 16:04:11.296634 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wc25d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(29bd8985-5f22-46e9-9868-607bf9be273e): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 16:04:11 crc kubenswrapper[4760]: E0121 16:04:11.297853 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="29bd8985-5f22-46e9-9868-607bf9be273e" Jan 21 16:04:11 crc kubenswrapper[4760]: E0121 16:04:11.365084 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 21 16:04:11 crc kubenswrapper[4760]: E0121 16:04:11.365303 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/
var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b94cb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(d0612ab6-de5e-4f61-9e1c-97f8237c996c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 16:04:11 crc kubenswrapper[4760]: E0121 16:04:11.366549 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="d0612ab6-de5e-4f61-9e1c-97f8237c996c" Jan 21 16:04:11 crc kubenswrapper[4760]: E0121 16:04:11.416112 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="d0612ab6-de5e-4f61-9e1c-97f8237c996c" Jan 21 16:04:11 crc kubenswrapper[4760]: E0121 16:04:11.416381 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="29bd8985-5f22-46e9-9868-607bf9be273e" Jan 21 16:04:12 crc kubenswrapper[4760]: E0121 16:04:12.081822 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Jan 21 16:04:12 crc kubenswrapper[4760]: E0121 16:04:12.082111 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n5c7h64ch5b6h54bhb9h568hc6h699h68fh64h5c5h75h79hd7hd9h545h596h674h564h584h78h544h5ch589h54bhcbh5f4h86h9ch66ch679h576q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropaga
tion:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ffzwr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(06184570-059b-4132-a5b6-365e3e12e383): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 16:04:12 crc kubenswrapper[4760]: E0121 16:04:12.084763 4760 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="06184570-059b-4132-a5b6-365e3e12e383" Jan 21 16:04:12 crc kubenswrapper[4760]: E0121 16:04:12.422171 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="06184570-059b-4132-a5b6-365e3e12e383" Jan 21 16:04:12 crc kubenswrapper[4760]: I0121 16:04:12.866611 4760 scope.go:117] "RemoveContainer" containerID="e6963d14d357f703576b0958af9bc058219608c4e7a2fbf733bfa1478e82dd20" Jan 21 16:04:12 crc kubenswrapper[4760]: E0121 16:04:12.872272 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 21 16:04:12 crc kubenswrapper[4760]: E0121 16:04:12.872834 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lcttk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-fm5r8_openstack(bd396dae-aefd-4646-8418-cd57cb44d7b7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 16:04:12 crc kubenswrapper[4760]: E0121 16:04:12.874064 4760 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-fm5r8" podUID="bd396dae-aefd-4646-8418-cd57cb44d7b7" Jan 21 16:04:12 crc kubenswrapper[4760]: E0121 16:04:12.974838 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 21 16:04:12 crc kubenswrapper[4760]: E0121 16:04:12.975031 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jp44b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-k6gph_openstack(1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 16:04:12 crc kubenswrapper[4760]: E0121 16:04:12.976962 4760 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-k6gph" podUID="1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef" Jan 21 16:04:13 crc kubenswrapper[4760]: I0121 16:04:13.012747 4760 scope.go:117] "RemoveContainer" containerID="d6f701408593aa929077af793666fc048b04a8fc688afd2e498f5b02c2a8a245" Jan 21 16:04:13 crc kubenswrapper[4760]: E0121 16:04:13.444592 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-fm5r8" podUID="bd396dae-aefd-4646-8418-cd57cb44d7b7" Jan 21 16:04:13 crc kubenswrapper[4760]: E0121 16:04:13.445583 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-k6gph" podUID="1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef" Jan 21 16:04:13 crc kubenswrapper[4760]: I0121 16:04:13.577046 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-jfrjn"] Jan 21 16:04:13 crc kubenswrapper[4760]: I0121 16:04:13.649389 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ltr79"] Jan 21 16:04:13 crc kubenswrapper[4760]: W0121 16:04:13.668869 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a0315f5_89b8_4589_b088_2ea2bb15e078.slice/crio-be6df1a9bd4d132efbc601b66976caa45f02a325f21c525b9d46595c2985735f WatchSource:0}: Error finding container be6df1a9bd4d132efbc601b66976caa45f02a325f21c525b9d46595c2985735f: Status 404 returned error can't find the container with id 
be6df1a9bd4d132efbc601b66976caa45f02a325f21c525b9d46595c2985735f Jan 21 16:04:13 crc kubenswrapper[4760]: I0121 16:04:13.857481 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 21 16:04:13 crc kubenswrapper[4760]: E0121 16:04:13.989860 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 21 16:04:13 crc kubenswrapper[4760]: E0121 16:04:13.990365 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l8nqz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-mxzx4_openstack(311ca2cc-2871-4326-a66d-7ebacf5d0739): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 16:04:13 crc kubenswrapper[4760]: E0121 16:04:13.991572 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-mxzx4" podUID="311ca2cc-2871-4326-a66d-7ebacf5d0739" Jan 21 16:04:14 crc kubenswrapper[4760]: I0121 16:04:14.451923 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-jfrjn" event={"ID":"1a0315f5-89b8-4589-b088-2ea2bb15e078","Type":"ContainerStarted","Data":"be6df1a9bd4d132efbc601b66976caa45f02a325f21c525b9d46595c2985735f"} Jan 21 16:04:14 crc kubenswrapper[4760]: E0121 16:04:14.465413 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 21 16:04:14 crc kubenswrapper[4760]: E0121 16:04:14.465624 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fxptk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-p7x9p_openstack(8fead1d9-342f-49d5-bf14-86767afa754f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 16:04:14 crc kubenswrapper[4760]: E0121 16:04:14.466845 4760 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-p7x9p" podUID="8fead1d9-342f-49d5-bf14-86767afa754f" Jan 21 16:04:14 crc kubenswrapper[4760]: W0121 16:04:14.510721 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc17cd40e_6e7b_4c1e_9ca8_e6edc1248330.slice/crio-df07cbf82e38a5a6b17156e84860f6d56505dffbfb4e0002429a5d5909a06fd4 WatchSource:0}: Error finding container df07cbf82e38a5a6b17156e84860f6d56505dffbfb4e0002429a5d5909a06fd4: Status 404 returned error can't find the container with id df07cbf82e38a5a6b17156e84860f6d56505dffbfb4e0002429a5d5909a06fd4 Jan 21 16:04:14 crc kubenswrapper[4760]: W0121 16:04:14.516934 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ab8d081_832d_4e4c_92e6_94a97545613c.slice/crio-a0bb16065b88dfd05ecf10ddede0414cc288b581dd7b894b34c43880d09338b0 WatchSource:0}: Error finding container a0bb16065b88dfd05ecf10ddede0414cc288b581dd7b894b34c43880d09338b0: Status 404 returned error can't find the container with id a0bb16065b88dfd05ecf10ddede0414cc288b581dd7b894b34c43880d09338b0 Jan 21 16:04:14 crc kubenswrapper[4760]: I0121 16:04:14.744567 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 21 16:04:14 crc kubenswrapper[4760]: W0121 16:04:14.814654 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod47448c69_3198_48d8_8623_9a339a934aca.slice/crio-c2804b95eb2dcaaf60d1bbf08a196dbcea1955ff3b7391e34cc87501e93fdd48 WatchSource:0}: Error finding container c2804b95eb2dcaaf60d1bbf08a196dbcea1955ff3b7391e34cc87501e93fdd48: Status 404 returned error can't find the container with id 
c2804b95eb2dcaaf60d1bbf08a196dbcea1955ff3b7391e34cc87501e93fdd48 Jan 21 16:04:15 crc kubenswrapper[4760]: I0121 16:04:15.023777 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-mxzx4" Jan 21 16:04:15 crc kubenswrapper[4760]: I0121 16:04:15.177475 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8nqz\" (UniqueName: \"kubernetes.io/projected/311ca2cc-2871-4326-a66d-7ebacf5d0739-kube-api-access-l8nqz\") pod \"311ca2cc-2871-4326-a66d-7ebacf5d0739\" (UID: \"311ca2cc-2871-4326-a66d-7ebacf5d0739\") " Jan 21 16:04:15 crc kubenswrapper[4760]: I0121 16:04:15.177631 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/311ca2cc-2871-4326-a66d-7ebacf5d0739-config\") pod \"311ca2cc-2871-4326-a66d-7ebacf5d0739\" (UID: \"311ca2cc-2871-4326-a66d-7ebacf5d0739\") " Jan 21 16:04:15 crc kubenswrapper[4760]: I0121 16:04:15.178202 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/311ca2cc-2871-4326-a66d-7ebacf5d0739-config" (OuterVolumeSpecName: "config") pod "311ca2cc-2871-4326-a66d-7ebacf5d0739" (UID: "311ca2cc-2871-4326-a66d-7ebacf5d0739"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:04:15 crc kubenswrapper[4760]: I0121 16:04:15.184519 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/311ca2cc-2871-4326-a66d-7ebacf5d0739-kube-api-access-l8nqz" (OuterVolumeSpecName: "kube-api-access-l8nqz") pod "311ca2cc-2871-4326-a66d-7ebacf5d0739" (UID: "311ca2cc-2871-4326-a66d-7ebacf5d0739"). InnerVolumeSpecName "kube-api-access-l8nqz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:04:15 crc kubenswrapper[4760]: I0121 16:04:15.279723 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8nqz\" (UniqueName: \"kubernetes.io/projected/311ca2cc-2871-4326-a66d-7ebacf5d0739-kube-api-access-l8nqz\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:15 crc kubenswrapper[4760]: I0121 16:04:15.279787 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/311ca2cc-2871-4326-a66d-7ebacf5d0739-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:15 crc kubenswrapper[4760]: I0121 16:04:15.471564 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ltr79" event={"ID":"c17cd40e-6e7b-4c1e-9ca8-e6edc1248330","Type":"ContainerStarted","Data":"df07cbf82e38a5a6b17156e84860f6d56505dffbfb4e0002429a5d5909a06fd4"} Jan 21 16:04:15 crc kubenswrapper[4760]: I0121 16:04:15.473740 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"9ab8d081-832d-4e4c-92e6-94a97545613c","Type":"ContainerStarted","Data":"a0bb16065b88dfd05ecf10ddede0414cc288b581dd7b894b34c43880d09338b0"} Jan 21 16:04:15 crc kubenswrapper[4760]: I0121 16:04:15.477360 4760 generic.go:334] "Generic (PLEG): container finished" podID="b88b3abe-b642-4d65-b822-5b62d6095959" containerID="8eea9ad76cc64c9e786de47ec4008fecabf97bd5c2c31169f4f09e8aba5f86b1" exitCode=0 Jan 21 16:04:15 crc kubenswrapper[4760]: I0121 16:04:15.477524 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bxqfs" event={"ID":"b88b3abe-b642-4d65-b822-5b62d6095959","Type":"ContainerDied","Data":"8eea9ad76cc64c9e786de47ec4008fecabf97bd5c2c31169f4f09e8aba5f86b1"} Jan 21 16:04:15 crc kubenswrapper[4760]: I0121 16:04:15.480082 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-mxzx4" Jan 21 16:04:15 crc kubenswrapper[4760]: I0121 16:04:15.480064 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-mxzx4" event={"ID":"311ca2cc-2871-4326-a66d-7ebacf5d0739","Type":"ContainerDied","Data":"4283b5ca5806c476cb853b8fdf9c1f4cf1b91acf8fa6d2be66e4d8061dc95540"} Jan 21 16:04:15 crc kubenswrapper[4760]: I0121 16:04:15.483742 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"47448c69-3198-48d8-8623-9a339a934aca","Type":"ContainerStarted","Data":"c2804b95eb2dcaaf60d1bbf08a196dbcea1955ff3b7391e34cc87501e93fdd48"} Jan 21 16:04:15 crc kubenswrapper[4760]: I0121 16:04:15.589009 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-mxzx4"] Jan 21 16:04:15 crc kubenswrapper[4760]: I0121 16:04:15.604056 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-mxzx4"] Jan 21 16:04:15 crc kubenswrapper[4760]: I0121 16:04:15.636640 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="311ca2cc-2871-4326-a66d-7ebacf5d0739" path="/var/lib/kubelet/pods/311ca2cc-2871-4326-a66d-7ebacf5d0739/volumes" Jan 21 16:04:16 crc kubenswrapper[4760]: I0121 16:04:16.390272 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-p7x9p" Jan 21 16:04:16 crc kubenswrapper[4760]: I0121 16:04:16.493862 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-p7x9p" event={"ID":"8fead1d9-342f-49d5-bf14-86767afa754f","Type":"ContainerDied","Data":"30da348f69a4121551a769cdf657e2d763a496fe50aee4af7b0fce21e3a41abd"} Jan 21 16:04:16 crc kubenswrapper[4760]: I0121 16:04:16.493966 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-p7x9p" Jan 21 16:04:16 crc kubenswrapper[4760]: I0121 16:04:16.500067 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"06b9d67d-1790-43ec-8009-91d0cd43e6da","Type":"ContainerStarted","Data":"7454443e39ed75706600273f4c7074ca9bd87ff352fb2d4323c9eb6e401331e1"} Jan 21 16:04:16 crc kubenswrapper[4760]: I0121 16:04:16.505808 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxptk\" (UniqueName: \"kubernetes.io/projected/8fead1d9-342f-49d5-bf14-86767afa754f-kube-api-access-fxptk\") pod \"8fead1d9-342f-49d5-bf14-86767afa754f\" (UID: \"8fead1d9-342f-49d5-bf14-86767afa754f\") " Jan 21 16:04:16 crc kubenswrapper[4760]: I0121 16:04:16.505902 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fead1d9-342f-49d5-bf14-86767afa754f-config\") pod \"8fead1d9-342f-49d5-bf14-86767afa754f\" (UID: \"8fead1d9-342f-49d5-bf14-86767afa754f\") " Jan 21 16:04:16 crc kubenswrapper[4760]: I0121 16:04:16.505984 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8fead1d9-342f-49d5-bf14-86767afa754f-dns-svc\") pod \"8fead1d9-342f-49d5-bf14-86767afa754f\" (UID: \"8fead1d9-342f-49d5-bf14-86767afa754f\") " Jan 21 16:04:16 crc kubenswrapper[4760]: I0121 16:04:16.506551 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fead1d9-342f-49d5-bf14-86767afa754f-config" (OuterVolumeSpecName: "config") pod "8fead1d9-342f-49d5-bf14-86767afa754f" (UID: "8fead1d9-342f-49d5-bf14-86767afa754f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:04:16 crc kubenswrapper[4760]: I0121 16:04:16.506961 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fead1d9-342f-49d5-bf14-86767afa754f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8fead1d9-342f-49d5-bf14-86767afa754f" (UID: "8fead1d9-342f-49d5-bf14-86767afa754f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:04:16 crc kubenswrapper[4760]: I0121 16:04:16.538562 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fead1d9-342f-49d5-bf14-86767afa754f-kube-api-access-fxptk" (OuterVolumeSpecName: "kube-api-access-fxptk") pod "8fead1d9-342f-49d5-bf14-86767afa754f" (UID: "8fead1d9-342f-49d5-bf14-86767afa754f"). InnerVolumeSpecName "kube-api-access-fxptk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:04:16 crc kubenswrapper[4760]: I0121 16:04:16.608917 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fead1d9-342f-49d5-bf14-86767afa754f-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:16 crc kubenswrapper[4760]: I0121 16:04:16.608955 4760 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8fead1d9-342f-49d5-bf14-86767afa754f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:16 crc kubenswrapper[4760]: I0121 16:04:16.608968 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxptk\" (UniqueName: \"kubernetes.io/projected/8fead1d9-342f-49d5-bf14-86767afa754f-kube-api-access-fxptk\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:16 crc kubenswrapper[4760]: I0121 16:04:16.867203 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-p7x9p"] Jan 21 16:04:16 crc kubenswrapper[4760]: I0121 16:04:16.879559 4760 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-p7x9p"] Jan 21 16:04:17 crc kubenswrapper[4760]: I0121 16:04:17.639566 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fead1d9-342f-49d5-bf14-86767afa754f" path="/var/lib/kubelet/pods/8fead1d9-342f-49d5-bf14-86767afa754f/volumes" Jan 21 16:04:19 crc kubenswrapper[4760]: I0121 16:04:19.551006 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7d829f67-5ff7-4334-bb2d-2767a311159c","Type":"ContainerStarted","Data":"cd516e839da52f458c24ab1e3d7dea21bf258da58e9c477318047c2b9eee183f"} Jan 21 16:04:20 crc kubenswrapper[4760]: I0121 16:04:20.946229 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:04:20 crc kubenswrapper[4760]: I0121 16:04:20.946807 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:04:21 crc kubenswrapper[4760]: I0121 16:04:21.590757 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bxqfs" event={"ID":"b88b3abe-b642-4d65-b822-5b62d6095959","Type":"ContainerStarted","Data":"26e1e02f803fdb65f3d7e9e5c13d86eeea9303f772223feee9986aa2994f5e8f"} Jan 21 16:04:22 crc kubenswrapper[4760]: I0121 16:04:22.605461 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"9ab8d081-832d-4e4c-92e6-94a97545613c","Type":"ContainerStarted","Data":"b79c9adc2709bfd044fcd677f5cbc08aa4aacf32baa57a8a241f590697a38129"} Jan 21 
16:04:22 crc kubenswrapper[4760]: I0121 16:04:22.607339 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-jfrjn" event={"ID":"1a0315f5-89b8-4589-b088-2ea2bb15e078","Type":"ContainerStarted","Data":"63cf54a82b66daadb9ce31e78f741df151255cdcb5e33d38249e921768fe9bcf"} Jan 21 16:04:22 crc kubenswrapper[4760]: I0121 16:04:22.609194 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"fd82db1d-e956-477b-99af-024e7e0a6170","Type":"ContainerStarted","Data":"343c16d56ef76b6c8f94de47a03323d6e9983c8996c93947515e45994c60af2c"} Jan 21 16:04:22 crc kubenswrapper[4760]: I0121 16:04:22.609719 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 21 16:04:22 crc kubenswrapper[4760]: I0121 16:04:22.611054 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"47448c69-3198-48d8-8623-9a339a934aca","Type":"ContainerStarted","Data":"e788eab3e1737667b3bb9727faeb9bb1fcb609a9d0ce3455858843576716c851"} Jan 21 16:04:22 crc kubenswrapper[4760]: I0121 16:04:22.612588 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ltr79" event={"ID":"c17cd40e-6e7b-4c1e-9ca8-e6edc1248330","Type":"ContainerStarted","Data":"6f8277640fc76ef42613eedca65adc002fe77ded6e80618e70e2da41302708db"} Jan 21 16:04:22 crc kubenswrapper[4760]: I0121 16:04:22.656940 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ltr79" podStartSLOduration=28.02079335 podStartE2EDuration="33.65691685s" podCreationTimestamp="2026-01-21 16:03:49 +0000 UTC" firstStartedPulling="2026-01-21 16:04:14.513666348 +0000 UTC m=+1025.181435936" lastFinishedPulling="2026-01-21 16:04:20.149789858 +0000 UTC m=+1030.817559436" observedRunningTime="2026-01-21 16:04:22.653784717 +0000 UTC m=+1033.321554295" watchObservedRunningTime="2026-01-21 16:04:22.65691685 +0000 UTC 
m=+1033.324686428" Jan 21 16:04:22 crc kubenswrapper[4760]: I0121 16:04:22.681880 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bxqfs" podStartSLOduration=5.331256642 podStartE2EDuration="44.681854541s" podCreationTimestamp="2026-01-21 16:03:38 +0000 UTC" firstStartedPulling="2026-01-21 16:03:40.798388294 +0000 UTC m=+991.466157872" lastFinishedPulling="2026-01-21 16:04:20.148986193 +0000 UTC m=+1030.816755771" observedRunningTime="2026-01-21 16:04:22.677481655 +0000 UTC m=+1033.345251233" watchObservedRunningTime="2026-01-21 16:04:22.681854541 +0000 UTC m=+1033.349624119" Jan 21 16:04:22 crc kubenswrapper[4760]: I0121 16:04:22.695774 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=4.257356465 podStartE2EDuration="39.695745959s" podCreationTimestamp="2026-01-21 16:03:43 +0000 UTC" firstStartedPulling="2026-01-21 16:03:44.695998646 +0000 UTC m=+995.363768224" lastFinishedPulling="2026-01-21 16:04:20.13438814 +0000 UTC m=+1030.802157718" observedRunningTime="2026-01-21 16:04:22.694435424 +0000 UTC m=+1033.362205002" watchObservedRunningTime="2026-01-21 16:04:22.695745959 +0000 UTC m=+1033.363515537" Jan 21 16:04:23 crc kubenswrapper[4760]: I0121 16:04:23.675577 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ltr79" Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.012829 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-sz9bq"] Jan 21 16:04:24 crc kubenswrapper[4760]: E0121 16:04:24.013849 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d267ab8-12dc-43a2-8199-7885783e8601" containerName="registry-server" Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.013940 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d267ab8-12dc-43a2-8199-7885783e8601" containerName="registry-server" 
Jan 21 16:04:24 crc kubenswrapper[4760]: E0121 16:04:24.014007 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d267ab8-12dc-43a2-8199-7885783e8601" containerName="extract-utilities" Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.014089 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d267ab8-12dc-43a2-8199-7885783e8601" containerName="extract-utilities" Jan 21 16:04:24 crc kubenswrapper[4760]: E0121 16:04:24.014177 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d267ab8-12dc-43a2-8199-7885783e8601" containerName="extract-content" Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.014247 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d267ab8-12dc-43a2-8199-7885783e8601" containerName="extract-content" Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.014483 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d267ab8-12dc-43a2-8199-7885783e8601" containerName="registry-server" Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.015092 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-sz9bq" Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.017741 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.029490 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-sz9bq"] Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.175195 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-sz9bq\" (UID: \"0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc\") " pod="openstack/ovn-controller-metrics-sz9bq" Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.175389 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc-config\") pod \"ovn-controller-metrics-sz9bq\" (UID: \"0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc\") " pod="openstack/ovn-controller-metrics-sz9bq" Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.175473 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc-combined-ca-bundle\") pod \"ovn-controller-metrics-sz9bq\" (UID: \"0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc\") " pod="openstack/ovn-controller-metrics-sz9bq" Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.175543 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc-ovs-rundir\") pod \"ovn-controller-metrics-sz9bq\" (UID: \"0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc\") " 
pod="openstack/ovn-controller-metrics-sz9bq" Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.175595 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc-ovn-rundir\") pod \"ovn-controller-metrics-sz9bq\" (UID: \"0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc\") " pod="openstack/ovn-controller-metrics-sz9bq" Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.175742 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgkdb\" (UniqueName: \"kubernetes.io/projected/0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc-kube-api-access-qgkdb\") pod \"ovn-controller-metrics-sz9bq\" (UID: \"0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc\") " pod="openstack/ovn-controller-metrics-sz9bq" Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.183721 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-k6gph"] Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.241805 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-xbl97"] Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.243101 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-xbl97" Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.248351 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.252146 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-xbl97"] Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.279980 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-sz9bq\" (UID: \"0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc\") " pod="openstack/ovn-controller-metrics-sz9bq" Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.280038 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc-config\") pod \"ovn-controller-metrics-sz9bq\" (UID: \"0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc\") " pod="openstack/ovn-controller-metrics-sz9bq" Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.280955 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc-config\") pod \"ovn-controller-metrics-sz9bq\" (UID: \"0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc\") " pod="openstack/ovn-controller-metrics-sz9bq" Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.281011 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc-combined-ca-bundle\") pod \"ovn-controller-metrics-sz9bq\" (UID: \"0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc\") " pod="openstack/ovn-controller-metrics-sz9bq" Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.281040 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc-ovn-rundir\") pod \"ovn-controller-metrics-sz9bq\" (UID: \"0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc\") " pod="openstack/ovn-controller-metrics-sz9bq" Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.281058 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc-ovs-rundir\") pod \"ovn-controller-metrics-sz9bq\" (UID: \"0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc\") " pod="openstack/ovn-controller-metrics-sz9bq" Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.281082 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgkdb\" (UniqueName: \"kubernetes.io/projected/0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc-kube-api-access-qgkdb\") pod \"ovn-controller-metrics-sz9bq\" (UID: \"0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc\") " pod="openstack/ovn-controller-metrics-sz9bq" Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.281884 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc-ovn-rundir\") pod \"ovn-controller-metrics-sz9bq\" (UID: \"0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc\") " pod="openstack/ovn-controller-metrics-sz9bq" Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.281947 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc-ovs-rundir\") pod \"ovn-controller-metrics-sz9bq\" (UID: \"0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc\") " pod="openstack/ovn-controller-metrics-sz9bq" Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.291750 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-sz9bq\" (UID: \"0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc\") " pod="openstack/ovn-controller-metrics-sz9bq" Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.293335 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc-combined-ca-bundle\") pod \"ovn-controller-metrics-sz9bq\" (UID: \"0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc\") " pod="openstack/ovn-controller-metrics-sz9bq" Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.304193 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgkdb\" (UniqueName: \"kubernetes.io/projected/0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc-kube-api-access-qgkdb\") pod \"ovn-controller-metrics-sz9bq\" (UID: \"0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc\") " pod="openstack/ovn-controller-metrics-sz9bq" Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.353205 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-sz9bq" Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.382807 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzdwz\" (UniqueName: \"kubernetes.io/projected/da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23-kube-api-access-pzdwz\") pod \"dnsmasq-dns-5bf47b49b7-xbl97\" (UID: \"da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23\") " pod="openstack/dnsmasq-dns-5bf47b49b7-xbl97" Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.382857 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23-config\") pod \"dnsmasq-dns-5bf47b49b7-xbl97\" (UID: \"da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23\") " pod="openstack/dnsmasq-dns-5bf47b49b7-xbl97" Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.382900 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-xbl97\" (UID: \"da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23\") " pod="openstack/dnsmasq-dns-5bf47b49b7-xbl97" Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.382930 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-xbl97\" (UID: \"da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23\") " pod="openstack/dnsmasq-dns-5bf47b49b7-xbl97" Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.485833 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-xbl97\" (UID: 
\"da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23\") " pod="openstack/dnsmasq-dns-5bf47b49b7-xbl97" Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.486556 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzdwz\" (UniqueName: \"kubernetes.io/projected/da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23-kube-api-access-pzdwz\") pod \"dnsmasq-dns-5bf47b49b7-xbl97\" (UID: \"da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23\") " pod="openstack/dnsmasq-dns-5bf47b49b7-xbl97" Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.486629 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23-config\") pod \"dnsmasq-dns-5bf47b49b7-xbl97\" (UID: \"da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23\") " pod="openstack/dnsmasq-dns-5bf47b49b7-xbl97" Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.486746 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-xbl97\" (UID: \"da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23\") " pod="openstack/dnsmasq-dns-5bf47b49b7-xbl97" Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.487416 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-xbl97\" (UID: \"da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23\") " pod="openstack/dnsmasq-dns-5bf47b49b7-xbl97" Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.487658 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-xbl97\" (UID: \"da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23\") " pod="openstack/dnsmasq-dns-5bf47b49b7-xbl97" Jan 21 
16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.488170 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23-config\") pod \"dnsmasq-dns-5bf47b49b7-xbl97\" (UID: \"da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23\") " pod="openstack/dnsmasq-dns-5bf47b49b7-xbl97" Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.522612 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzdwz\" (UniqueName: \"kubernetes.io/projected/da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23-kube-api-access-pzdwz\") pod \"dnsmasq-dns-5bf47b49b7-xbl97\" (UID: \"da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23\") " pod="openstack/dnsmasq-dns-5bf47b49b7-xbl97" Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.527414 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-fm5r8"] Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.568801 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-xbl97" Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.640733 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-vpn5h"] Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.642270 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-vpn5h" Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.648621 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.664622 4760 generic.go:334] "Generic (PLEG): container finished" podID="1a0315f5-89b8-4589-b088-2ea2bb15e078" containerID="63cf54a82b66daadb9ce31e78f741df151255cdcb5e33d38249e921768fe9bcf" exitCode=0 Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.666306 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-jfrjn" event={"ID":"1a0315f5-89b8-4589-b088-2ea2bb15e078","Type":"ContainerDied","Data":"63cf54a82b66daadb9ce31e78f741df151255cdcb5e33d38249e921768fe9bcf"} Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.705398 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-vpn5h"] Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.799690 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/662b2f90-4ca1-4670-9b55-57a691e191ff-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-vpn5h\" (UID: \"662b2f90-4ca1-4670-9b55-57a691e191ff\") " pod="openstack/dnsmasq-dns-8554648995-vpn5h" Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.800273 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/662b2f90-4ca1-4670-9b55-57a691e191ff-dns-svc\") pod \"dnsmasq-dns-8554648995-vpn5h\" (UID: \"662b2f90-4ca1-4670-9b55-57a691e191ff\") " pod="openstack/dnsmasq-dns-8554648995-vpn5h" Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.800308 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/662b2f90-4ca1-4670-9b55-57a691e191ff-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-vpn5h\" (UID: \"662b2f90-4ca1-4670-9b55-57a691e191ff\") " pod="openstack/dnsmasq-dns-8554648995-vpn5h" Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.800502 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/662b2f90-4ca1-4670-9b55-57a691e191ff-config\") pod \"dnsmasq-dns-8554648995-vpn5h\" (UID: \"662b2f90-4ca1-4670-9b55-57a691e191ff\") " pod="openstack/dnsmasq-dns-8554648995-vpn5h" Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.800556 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn7pf\" (UniqueName: \"kubernetes.io/projected/662b2f90-4ca1-4670-9b55-57a691e191ff-kube-api-access-wn7pf\") pod \"dnsmasq-dns-8554648995-vpn5h\" (UID: \"662b2f90-4ca1-4670-9b55-57a691e191ff\") " pod="openstack/dnsmasq-dns-8554648995-vpn5h" Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.844626 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-k6gph" Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.906490 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef-dns-svc\") pod \"1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef\" (UID: \"1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef\") " Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.906648 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jp44b\" (UniqueName: \"kubernetes.io/projected/1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef-kube-api-access-jp44b\") pod \"1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef\" (UID: \"1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef\") " Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.906950 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef-config\") pod \"1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef\" (UID: \"1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef\") " Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.907005 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef" (UID: "1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.907279 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/662b2f90-4ca1-4670-9b55-57a691e191ff-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-vpn5h\" (UID: \"662b2f90-4ca1-4670-9b55-57a691e191ff\") " pod="openstack/dnsmasq-dns-8554648995-vpn5h" Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.907431 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/662b2f90-4ca1-4670-9b55-57a691e191ff-dns-svc\") pod \"dnsmasq-dns-8554648995-vpn5h\" (UID: \"662b2f90-4ca1-4670-9b55-57a691e191ff\") " pod="openstack/dnsmasq-dns-8554648995-vpn5h" Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.907459 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/662b2f90-4ca1-4670-9b55-57a691e191ff-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-vpn5h\" (UID: \"662b2f90-4ca1-4670-9b55-57a691e191ff\") " pod="openstack/dnsmasq-dns-8554648995-vpn5h" Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.907523 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/662b2f90-4ca1-4670-9b55-57a691e191ff-config\") pod \"dnsmasq-dns-8554648995-vpn5h\" (UID: \"662b2f90-4ca1-4670-9b55-57a691e191ff\") " pod="openstack/dnsmasq-dns-8554648995-vpn5h" Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.907554 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn7pf\" (UniqueName: \"kubernetes.io/projected/662b2f90-4ca1-4670-9b55-57a691e191ff-kube-api-access-wn7pf\") pod \"dnsmasq-dns-8554648995-vpn5h\" (UID: \"662b2f90-4ca1-4670-9b55-57a691e191ff\") " pod="openstack/dnsmasq-dns-8554648995-vpn5h" Jan 21 
16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.907599 4760 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.907669 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef-config" (OuterVolumeSpecName: "config") pod "1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef" (UID: "1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.908704 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/662b2f90-4ca1-4670-9b55-57a691e191ff-dns-svc\") pod \"dnsmasq-dns-8554648995-vpn5h\" (UID: \"662b2f90-4ca1-4670-9b55-57a691e191ff\") " pod="openstack/dnsmasq-dns-8554648995-vpn5h" Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.908992 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/662b2f90-4ca1-4670-9b55-57a691e191ff-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-vpn5h\" (UID: \"662b2f90-4ca1-4670-9b55-57a691e191ff\") " pod="openstack/dnsmasq-dns-8554648995-vpn5h" Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.909181 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/662b2f90-4ca1-4670-9b55-57a691e191ff-config\") pod \"dnsmasq-dns-8554648995-vpn5h\" (UID: \"662b2f90-4ca1-4670-9b55-57a691e191ff\") " pod="openstack/dnsmasq-dns-8554648995-vpn5h" Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.909414 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/662b2f90-4ca1-4670-9b55-57a691e191ff-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-vpn5h\" (UID: \"662b2f90-4ca1-4670-9b55-57a691e191ff\") " pod="openstack/dnsmasq-dns-8554648995-vpn5h" Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.912912 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef-kube-api-access-jp44b" (OuterVolumeSpecName: "kube-api-access-jp44b") pod "1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef" (UID: "1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef"). InnerVolumeSpecName "kube-api-access-jp44b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.913746 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-sz9bq"] Jan 21 16:04:24 crc kubenswrapper[4760]: I0121 16:04:24.928604 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn7pf\" (UniqueName: \"kubernetes.io/projected/662b2f90-4ca1-4670-9b55-57a691e191ff-kube-api-access-wn7pf\") pod \"dnsmasq-dns-8554648995-vpn5h\" (UID: \"662b2f90-4ca1-4670-9b55-57a691e191ff\") " pod="openstack/dnsmasq-dns-8554648995-vpn5h" Jan 21 16:04:25 crc kubenswrapper[4760]: I0121 16:04:25.008966 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jp44b\" (UniqueName: \"kubernetes.io/projected/1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef-kube-api-access-jp44b\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:25 crc kubenswrapper[4760]: I0121 16:04:25.008999 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:25 crc kubenswrapper[4760]: I0121 16:04:25.083979 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-vpn5h" Jan 21 16:04:25 crc kubenswrapper[4760]: I0121 16:04:25.142007 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-fm5r8" Jan 21 16:04:25 crc kubenswrapper[4760]: I0121 16:04:25.210869 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lcttk\" (UniqueName: \"kubernetes.io/projected/bd396dae-aefd-4646-8418-cd57cb44d7b7-kube-api-access-lcttk\") pod \"bd396dae-aefd-4646-8418-cd57cb44d7b7\" (UID: \"bd396dae-aefd-4646-8418-cd57cb44d7b7\") " Jan 21 16:04:25 crc kubenswrapper[4760]: I0121 16:04:25.211473 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd396dae-aefd-4646-8418-cd57cb44d7b7-dns-svc\") pod \"bd396dae-aefd-4646-8418-cd57cb44d7b7\" (UID: \"bd396dae-aefd-4646-8418-cd57cb44d7b7\") " Jan 21 16:04:25 crc kubenswrapper[4760]: I0121 16:04:25.211624 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd396dae-aefd-4646-8418-cd57cb44d7b7-config\") pod \"bd396dae-aefd-4646-8418-cd57cb44d7b7\" (UID: \"bd396dae-aefd-4646-8418-cd57cb44d7b7\") " Jan 21 16:04:25 crc kubenswrapper[4760]: I0121 16:04:25.212536 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd396dae-aefd-4646-8418-cd57cb44d7b7-config" (OuterVolumeSpecName: "config") pod "bd396dae-aefd-4646-8418-cd57cb44d7b7" (UID: "bd396dae-aefd-4646-8418-cd57cb44d7b7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:04:25 crc kubenswrapper[4760]: I0121 16:04:25.213988 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd396dae-aefd-4646-8418-cd57cb44d7b7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bd396dae-aefd-4646-8418-cd57cb44d7b7" (UID: "bd396dae-aefd-4646-8418-cd57cb44d7b7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:04:25 crc kubenswrapper[4760]: I0121 16:04:25.218133 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd396dae-aefd-4646-8418-cd57cb44d7b7-kube-api-access-lcttk" (OuterVolumeSpecName: "kube-api-access-lcttk") pod "bd396dae-aefd-4646-8418-cd57cb44d7b7" (UID: "bd396dae-aefd-4646-8418-cd57cb44d7b7"). InnerVolumeSpecName "kube-api-access-lcttk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:04:25 crc kubenswrapper[4760]: I0121 16:04:25.314056 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lcttk\" (UniqueName: \"kubernetes.io/projected/bd396dae-aefd-4646-8418-cd57cb44d7b7-kube-api-access-lcttk\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:25 crc kubenswrapper[4760]: I0121 16:04:25.314098 4760 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd396dae-aefd-4646-8418-cd57cb44d7b7-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:25 crc kubenswrapper[4760]: I0121 16:04:25.314110 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd396dae-aefd-4646-8418-cd57cb44d7b7-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:25 crc kubenswrapper[4760]: I0121 16:04:25.322404 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-xbl97"] Jan 21 16:04:25 crc kubenswrapper[4760]: W0121 16:04:25.327290 4760 manager.go:1169] Failed to process 
watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda16264e_f8ca_4b1a_bf0e_6b59fbf0ec23.slice/crio-938c73d1dc12b97bce56d26ad255edd53235dd723a1b39465bcba8108316f980 WatchSource:0}: Error finding container 938c73d1dc12b97bce56d26ad255edd53235dd723a1b39465bcba8108316f980: Status 404 returned error can't find the container with id 938c73d1dc12b97bce56d26ad255edd53235dd723a1b39465bcba8108316f980 Jan 21 16:04:25 crc kubenswrapper[4760]: I0121 16:04:25.338897 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-vpn5h"] Jan 21 16:04:25 crc kubenswrapper[4760]: I0121 16:04:25.673429 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-vpn5h" event={"ID":"662b2f90-4ca1-4670-9b55-57a691e191ff","Type":"ContainerStarted","Data":"48c80916c91e39397ff5a93ea5bc1cf8687a4f0ad22dad533560450611beba05"} Jan 21 16:04:25 crc kubenswrapper[4760]: I0121 16:04:25.674303 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-sz9bq" event={"ID":"0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc","Type":"ContainerStarted","Data":"e489ee4287bf7365a92c516b7a1274122da138bba0f0b4fc6ff40bf265fbeec0"} Jan 21 16:04:25 crc kubenswrapper[4760]: I0121 16:04:25.675239 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-fm5r8" event={"ID":"bd396dae-aefd-4646-8418-cd57cb44d7b7","Type":"ContainerDied","Data":"8dd02b336930ac5d7bceca40dc5e196ba41d6ce619474ebbe3c08a39a088c0d0"} Jan 21 16:04:25 crc kubenswrapper[4760]: I0121 16:04:25.675343 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-fm5r8" Jan 21 16:04:25 crc kubenswrapper[4760]: I0121 16:04:25.677189 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-xbl97" event={"ID":"da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23","Type":"ContainerStarted","Data":"938c73d1dc12b97bce56d26ad255edd53235dd723a1b39465bcba8108316f980"} Jan 21 16:04:25 crc kubenswrapper[4760]: I0121 16:04:25.678068 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-k6gph" event={"ID":"1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef","Type":"ContainerDied","Data":"e1f4f036266dec528347239c822ce39fd90f90c68aec27f36ca257d2569c22b3"} Jan 21 16:04:25 crc kubenswrapper[4760]: I0121 16:04:25.678121 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-k6gph" Jan 21 16:04:25 crc kubenswrapper[4760]: I0121 16:04:25.710923 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-fm5r8"] Jan 21 16:04:25 crc kubenswrapper[4760]: I0121 16:04:25.714933 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-fm5r8"] Jan 21 16:04:25 crc kubenswrapper[4760]: I0121 16:04:25.753946 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-k6gph"] Jan 21 16:04:25 crc kubenswrapper[4760]: I0121 16:04:25.773893 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-k6gph"] Jan 21 16:04:27 crc kubenswrapper[4760]: I0121 16:04:27.636731 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef" path="/var/lib/kubelet/pods/1afcd4c8-23d6-4e7e-9665-1f8ed0b5b3ef/volumes" Jan 21 16:04:27 crc kubenswrapper[4760]: I0121 16:04:27.637835 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd396dae-aefd-4646-8418-cd57cb44d7b7" 
path="/var/lib/kubelet/pods/bd396dae-aefd-4646-8418-cd57cb44d7b7/volumes" Jan 21 16:04:29 crc kubenswrapper[4760]: I0121 16:04:29.341404 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bxqfs" Jan 21 16:04:29 crc kubenswrapper[4760]: I0121 16:04:29.341480 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bxqfs" Jan 21 16:04:29 crc kubenswrapper[4760]: I0121 16:04:29.393133 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bxqfs" Jan 21 16:04:29 crc kubenswrapper[4760]: I0121 16:04:29.774297 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bxqfs" Jan 21 16:04:29 crc kubenswrapper[4760]: I0121 16:04:29.832712 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bxqfs"] Jan 21 16:04:31 crc kubenswrapper[4760]: I0121 16:04:31.737739 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bxqfs" podUID="b88b3abe-b642-4d65-b822-5b62d6095959" containerName="registry-server" containerID="cri-o://26e1e02f803fdb65f3d7e9e5c13d86eeea9303f772223feee9986aa2994f5e8f" gracePeriod=2 Jan 21 16:04:33 crc kubenswrapper[4760]: I0121 16:04:33.930123 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 21 16:04:39 crc kubenswrapper[4760]: E0121 16:04:39.341809 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 26e1e02f803fdb65f3d7e9e5c13d86eeea9303f772223feee9986aa2994f5e8f is running failed: container process not found" containerID="26e1e02f803fdb65f3d7e9e5c13d86eeea9303f772223feee9986aa2994f5e8f" cmd=["grpc_health_probe","-addr=:50051"] 
Jan 21 16:04:39 crc kubenswrapper[4760]: E0121 16:04:39.342648 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 26e1e02f803fdb65f3d7e9e5c13d86eeea9303f772223feee9986aa2994f5e8f is running failed: container process not found" containerID="26e1e02f803fdb65f3d7e9e5c13d86eeea9303f772223feee9986aa2994f5e8f" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 16:04:39 crc kubenswrapper[4760]: E0121 16:04:39.343224 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 26e1e02f803fdb65f3d7e9e5c13d86eeea9303f772223feee9986aa2994f5e8f is running failed: container process not found" containerID="26e1e02f803fdb65f3d7e9e5c13d86eeea9303f772223feee9986aa2994f5e8f" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 16:04:39 crc kubenswrapper[4760]: E0121 16:04:39.343314 4760 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 26e1e02f803fdb65f3d7e9e5c13d86eeea9303f772223feee9986aa2994f5e8f is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-bxqfs" podUID="b88b3abe-b642-4d65-b822-5b62d6095959" containerName="registry-server" Jan 21 16:04:41 crc kubenswrapper[4760]: I0121 16:04:41.816004 4760 generic.go:334] "Generic (PLEG): container finished" podID="b88b3abe-b642-4d65-b822-5b62d6095959" containerID="26e1e02f803fdb65f3d7e9e5c13d86eeea9303f772223feee9986aa2994f5e8f" exitCode=0 Jan 21 16:04:41 crc kubenswrapper[4760]: I0121 16:04:41.816070 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bxqfs" event={"ID":"b88b3abe-b642-4d65-b822-5b62d6095959","Type":"ContainerDied","Data":"26e1e02f803fdb65f3d7e9e5c13d86eeea9303f772223feee9986aa2994f5e8f"} Jan 21 16:04:41 crc kubenswrapper[4760]: I0121 16:04:41.819539 
4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-jfrjn" event={"ID":"1a0315f5-89b8-4589-b088-2ea2bb15e078","Type":"ContainerStarted","Data":"4fd9f1e6cb871b557ec521f9efa6f556b021ee7feade09b4b4b8df6fd9d9ed96"} Jan 21 16:04:46 crc kubenswrapper[4760]: E0121 16:04:46.315730 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = writing blob: storing blob to file \"/var/tmp/container_images_storage4198104635/3\": happened during read: context canceled" image="quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified" Jan 21 16:04:46 crc kubenswrapper[4760]: E0121 16:04:46.316528 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstack-network-exporter,Image:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,Command:[/app/openstack-network-exporter],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPENSTACK_NETWORK_EXPORTER_YAML,Value:/etc/config/openstack-network-exporter.yaml,ValueFrom:nil,},EnvVar{Name:CONFIG_HASH,Value:n556h5cbh5cbh5b9h6h5bch68dh55fh5d8h9fh5d4h57hf9h598hcbh569h9fhfdh589hcdh64h667h667h688h94h678h5b8h555h58fh5c6h576h5fq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovnmetrics.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovnmetrics.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metric
s-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4q7zb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-nb-0_openstack(47448c69-3198-48d8-8623-9a339a934aca): ErrImagePull: rpc error: code = Canceled desc = writing blob: storing blob to file \"/var/tmp/container_images_storage4198104635/3\": happened during read: context canceled" logger="UnhandledError" Jan 21 16:04:46 crc kubenswrapper[4760]: E0121 16:04:46.317761 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ErrImagePull: \"rpc error: code = Canceled desc = writing blob: storing blob to file \\\"/var/tmp/container_images_storage4198104635/3\\\": happened during read: context canceled\"" pod="openstack/ovsdbserver-nb-0" podUID="47448c69-3198-48d8-8623-9a339a934aca" Jan 21 16:04:46 crc kubenswrapper[4760]: I0121 16:04:46.378225 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bxqfs" Jan 21 16:04:46 crc kubenswrapper[4760]: I0121 16:04:46.484594 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4f5vs\" (UniqueName: \"kubernetes.io/projected/b88b3abe-b642-4d65-b822-5b62d6095959-kube-api-access-4f5vs\") pod \"b88b3abe-b642-4d65-b822-5b62d6095959\" (UID: \"b88b3abe-b642-4d65-b822-5b62d6095959\") " Jan 21 16:04:46 crc kubenswrapper[4760]: I0121 16:04:46.484693 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b88b3abe-b642-4d65-b822-5b62d6095959-utilities\") pod \"b88b3abe-b642-4d65-b822-5b62d6095959\" (UID: \"b88b3abe-b642-4d65-b822-5b62d6095959\") " Jan 21 16:04:46 crc kubenswrapper[4760]: I0121 16:04:46.484851 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b88b3abe-b642-4d65-b822-5b62d6095959-catalog-content\") pod \"b88b3abe-b642-4d65-b822-5b62d6095959\" (UID: \"b88b3abe-b642-4d65-b822-5b62d6095959\") " Jan 21 16:04:46 crc kubenswrapper[4760]: I0121 16:04:46.485813 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b88b3abe-b642-4d65-b822-5b62d6095959-utilities" (OuterVolumeSpecName: "utilities") pod "b88b3abe-b642-4d65-b822-5b62d6095959" (UID: "b88b3abe-b642-4d65-b822-5b62d6095959"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:04:46 crc kubenswrapper[4760]: I0121 16:04:46.493261 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b88b3abe-b642-4d65-b822-5b62d6095959-kube-api-access-4f5vs" (OuterVolumeSpecName: "kube-api-access-4f5vs") pod "b88b3abe-b642-4d65-b822-5b62d6095959" (UID: "b88b3abe-b642-4d65-b822-5b62d6095959"). InnerVolumeSpecName "kube-api-access-4f5vs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:04:46 crc kubenswrapper[4760]: I0121 16:04:46.532084 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b88b3abe-b642-4d65-b822-5b62d6095959-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b88b3abe-b642-4d65-b822-5b62d6095959" (UID: "b88b3abe-b642-4d65-b822-5b62d6095959"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:04:46 crc kubenswrapper[4760]: I0121 16:04:46.586746 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b88b3abe-b642-4d65-b822-5b62d6095959-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:46 crc kubenswrapper[4760]: I0121 16:04:46.586794 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4f5vs\" (UniqueName: \"kubernetes.io/projected/b88b3abe-b642-4d65-b822-5b62d6095959-kube-api-access-4f5vs\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:46 crc kubenswrapper[4760]: I0121 16:04:46.586821 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b88b3abe-b642-4d65-b822-5b62d6095959-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:46 crc kubenswrapper[4760]: E0121 16:04:46.766057 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified" Jan 21 16:04:46 crc kubenswrapper[4760]: E0121 16:04:46.766236 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:openstack-network-exporter,Image:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,Command:[/app/openstack-network-exporter],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPENSTACK_NETWORK_EXPORTER_YAML,Value:/etc/config/openstack-network-exporter.yaml,ValueFrom:nil,},EnvVar{Name:CONFIG_HASH,Value:n7hfch56dh5d7hb4h54ch55dh8h585h55ch86h57dh55dh687h566h9ch58h99h5dch56chbch688hfbh564h8fh5cch5d8h596hbch694hd8h4q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovs-rundir,ReadOnly:true,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-rundir,ReadOnly:true,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovnmetrics.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovnmetrics.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qgkdb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termina
tion-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-metrics-sz9bq_openstack(0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 16:04:46 crc kubenswrapper[4760]: E0121 16:04:46.769692 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-metrics-sz9bq" podUID="0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc" Jan 21 16:04:46 crc kubenswrapper[4760]: E0121 16:04:46.785047 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified" Jan 21 16:04:46 crc kubenswrapper[4760]: E0121 16:04:46.785307 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:openstack-network-exporter,Image:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,Command:[/app/openstack-network-exporter],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPENSTACK_NETWORK_EXPORTER_YAML,Value:/etc/config/openstack-network-exporter.yaml,ValueFrom:nil,},EnvVar{Name:CONFIG_HASH,Value:n56bhd7h5bdh79hf5h59bh8bh544h547h5cdh98h5f5h5bfh684h687hc4hfdh5cfh5ddh9fh586h7fhddhbfh7chb8h88h8hd7h5b9hd4h68dq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovnmetrics.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovnmetrics.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kwh7p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,}
,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-sb-0_openstack(9ab8d081-832d-4e4c-92e6-94a97545613c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 16:04:46 crc kubenswrapper[4760]: E0121 16:04:46.786606 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovsdbserver-sb-0" podUID="9ab8d081-832d-4e4c-92e6-94a97545613c" Jan 21 16:04:46 crc kubenswrapper[4760]: I0121 16:04:46.866169 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bxqfs" event={"ID":"b88b3abe-b642-4d65-b822-5b62d6095959","Type":"ContainerDied","Data":"f99d79edb43f406eefad8b2bf5c43d8cb6c67afb47dc2beeecadfa37fc9351b3"} Jan 21 16:04:46 crc kubenswrapper[4760]: I0121 16:04:46.866574 4760 scope.go:117] "RemoveContainer" containerID="26e1e02f803fdb65f3d7e9e5c13d86eeea9303f772223feee9986aa2994f5e8f" Jan 21 16:04:46 crc kubenswrapper[4760]: I0121 16:04:46.866807 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bxqfs" Jan 21 16:04:46 crc kubenswrapper[4760]: E0121 16:04:46.869834 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovn-controller-metrics-sz9bq" podUID="0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc" Jan 21 16:04:46 crc kubenswrapper[4760]: E0121 16:04:46.870146 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="47448c69-3198-48d8-8623-9a339a934aca" Jan 21 16:04:46 crc kubenswrapper[4760]: E0121 16:04:46.870373 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="9ab8d081-832d-4e4c-92e6-94a97545613c" Jan 21 16:04:46 crc kubenswrapper[4760]: I0121 16:04:46.991459 4760 scope.go:117] "RemoveContainer" containerID="8eea9ad76cc64c9e786de47ec4008fecabf97bd5c2c31169f4f09e8aba5f86b1" Jan 21 16:04:47 crc kubenswrapper[4760]: I0121 16:04:47.027141 4760 scope.go:117] "RemoveContainer" containerID="916d38b69e34b3c7bfd101d0e1dafae28fb9daba7502d1230c6e3a8a77f5e9b8" Jan 21 16:04:47 crc kubenswrapper[4760]: I0121 16:04:47.028463 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bxqfs"] Jan 21 16:04:47 crc kubenswrapper[4760]: I0121 16:04:47.035165 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bxqfs"] 
Jan 21 16:04:47 crc kubenswrapper[4760]: I0121 16:04:47.632802 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b88b3abe-b642-4d65-b822-5b62d6095959" path="/var/lib/kubelet/pods/b88b3abe-b642-4d65-b822-5b62d6095959/volumes" Jan 21 16:04:47 crc kubenswrapper[4760]: I0121 16:04:47.701107 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 21 16:04:47 crc kubenswrapper[4760]: I0121 16:04:47.740958 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 21 16:04:47 crc kubenswrapper[4760]: I0121 16:04:47.872535 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"29bd8985-5f22-46e9-9868-607bf9be273e","Type":"ContainerStarted","Data":"42cbba31963b815d7127acf22822c64b9840a54e6d1976333a3ae2d7a23592ae"} Jan 21 16:04:47 crc kubenswrapper[4760]: I0121 16:04:47.874221 4760 generic.go:334] "Generic (PLEG): container finished" podID="662b2f90-4ca1-4670-9b55-57a691e191ff" containerID="8317b3fdc217b7dd117467332217df1262840073d360445fc7d8e24e5aa0880b" exitCode=0 Jan 21 16:04:47 crc kubenswrapper[4760]: I0121 16:04:47.874295 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-vpn5h" event={"ID":"662b2f90-4ca1-4670-9b55-57a691e191ff","Type":"ContainerDied","Data":"8317b3fdc217b7dd117467332217df1262840073d360445fc7d8e24e5aa0880b"} Jan 21 16:04:47 crc kubenswrapper[4760]: I0121 16:04:47.883645 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-jfrjn" event={"ID":"1a0315f5-89b8-4589-b088-2ea2bb15e078","Type":"ContainerStarted","Data":"5a46ced7f280cd2105ee37a7d22ae2e2d4c939e6e4a5511def0756be4bddf06b"} Jan 21 16:04:47 crc kubenswrapper[4760]: I0121 16:04:47.883917 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-jfrjn" Jan 21 16:04:47 crc kubenswrapper[4760]: I0121 
16:04:47.892066 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"06184570-059b-4132-a5b6-365e3e12e383","Type":"ContainerStarted","Data":"8a1a0549ca962e9674c6c78be6d84ace96b94d441284778e15a28f67453558a0"} Jan 21 16:04:47 crc kubenswrapper[4760]: I0121 16:04:47.892560 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 21 16:04:47 crc kubenswrapper[4760]: I0121 16:04:47.905887 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d0612ab6-de5e-4f61-9e1c-97f8237c996c","Type":"ContainerStarted","Data":"0a5d3c9104c8eb1e808ed580c2980a8b30b9bc7876cbfd307d895089a165e7c3"} Jan 21 16:04:47 crc kubenswrapper[4760]: I0121 16:04:47.909142 4760 generic.go:334] "Generic (PLEG): container finished" podID="da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23" containerID="618b23a439db0c92941f1772631b859d7f948a8e1341b668fbc17678ad724b5c" exitCode=0 Jan 21 16:04:47 crc kubenswrapper[4760]: I0121 16:04:47.910625 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-xbl97" event={"ID":"da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23","Type":"ContainerDied","Data":"618b23a439db0c92941f1772631b859d7f948a8e1341b668fbc17678ad724b5c"} Jan 21 16:04:47 crc kubenswrapper[4760]: I0121 16:04:47.913362 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 21 16:04:47 crc kubenswrapper[4760]: E0121 16:04:47.911649 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="47448c69-3198-48d8-8623-9a339a934aca" Jan 21 16:04:47 crc kubenswrapper[4760]: I0121 16:04:47.929686 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ovn-controller-ovs-jfrjn" podStartSLOduration=52.475277063 podStartE2EDuration="58.929624483s" podCreationTimestamp="2026-01-21 16:03:49 +0000 UTC" firstStartedPulling="2026-01-21 16:04:13.673299814 +0000 UTC m=+1024.341069392" lastFinishedPulling="2026-01-21 16:04:20.127647234 +0000 UTC m=+1030.795416812" observedRunningTime="2026-01-21 16:04:47.924937381 +0000 UTC m=+1058.592706979" watchObservedRunningTime="2026-01-21 16:04:47.929624483 +0000 UTC m=+1058.597394061" Jan 21 16:04:47 crc kubenswrapper[4760]: I0121 16:04:47.953427 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=3.468053296 podStartE2EDuration="1m6.953406199s" podCreationTimestamp="2026-01-21 16:03:41 +0000 UTC" firstStartedPulling="2026-01-21 16:03:43.263047521 +0000 UTC m=+993.930817089" lastFinishedPulling="2026-01-21 16:04:46.748400414 +0000 UTC m=+1057.416169992" observedRunningTime="2026-01-21 16:04:47.947340831 +0000 UTC m=+1058.615110429" watchObservedRunningTime="2026-01-21 16:04:47.953406199 +0000 UTC m=+1058.621175777" Jan 21 16:04:47 crc kubenswrapper[4760]: I0121 16:04:47.968949 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 21 16:04:48 crc kubenswrapper[4760]: I0121 16:04:48.791462 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 21 16:04:48 crc kubenswrapper[4760]: E0121 16:04:48.795664 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="9ab8d081-832d-4e4c-92e6-94a97545613c" Jan 21 16:04:48 crc kubenswrapper[4760]: I0121 16:04:48.837908 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/ovsdbserver-sb-0" Jan 21 16:04:48 crc kubenswrapper[4760]: I0121 16:04:48.925468 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-vpn5h" event={"ID":"662b2f90-4ca1-4670-9b55-57a691e191ff","Type":"ContainerStarted","Data":"2d551787711d5acc1cce8ad29717be0481e72a6277cf19787efaef4058b7d0b1"} Jan 21 16:04:48 crc kubenswrapper[4760]: I0121 16:04:48.925609 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-vpn5h" Jan 21 16:04:48 crc kubenswrapper[4760]: I0121 16:04:48.929015 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-xbl97" event={"ID":"da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23","Type":"ContainerStarted","Data":"d6e1cbe9084fa3c4585b1cc835b11f2fe1a7fa534f3660bbc34d8279a5359508"} Jan 21 16:04:48 crc kubenswrapper[4760]: I0121 16:04:48.929051 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bf47b49b7-xbl97" Jan 21 16:04:48 crc kubenswrapper[4760]: E0121 16:04:48.929683 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="9ab8d081-832d-4e4c-92e6-94a97545613c" Jan 21 16:04:48 crc kubenswrapper[4760]: I0121 16:04:48.929959 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-jfrjn" Jan 21 16:04:48 crc kubenswrapper[4760]: I0121 16:04:48.930124 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 21 16:04:48 crc kubenswrapper[4760]: E0121 16:04:48.931504 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="47448c69-3198-48d8-8623-9a339a934aca" Jan 21 16:04:48 crc kubenswrapper[4760]: I0121 16:04:48.953561 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-vpn5h" podStartSLOduration=3.516640885 podStartE2EDuration="24.953535165s" podCreationTimestamp="2026-01-21 16:04:24 +0000 UTC" firstStartedPulling="2026-01-21 16:04:25.329192661 +0000 UTC m=+1035.996962239" lastFinishedPulling="2026-01-21 16:04:46.766086941 +0000 UTC m=+1057.433856519" observedRunningTime="2026-01-21 16:04:48.945218932 +0000 UTC m=+1059.612988500" watchObservedRunningTime="2026-01-21 16:04:48.953535165 +0000 UTC m=+1059.621304743" Jan 21 16:04:48 crc kubenswrapper[4760]: I0121 16:04:48.977503 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bf47b49b7-xbl97" podStartSLOduration=3.553261738 podStartE2EDuration="24.977467775s" podCreationTimestamp="2026-01-21 16:04:24 +0000 UTC" firstStartedPulling="2026-01-21 16:04:25.342955185 +0000 UTC m=+1036.010724763" lastFinishedPulling="2026-01-21 16:04:46.767161222 +0000 UTC m=+1057.434930800" observedRunningTime="2026-01-21 16:04:48.966564301 +0000 UTC m=+1059.634333879" watchObservedRunningTime="2026-01-21 16:04:48.977467775 +0000 UTC m=+1059.645237353" Jan 21 16:04:48 crc kubenswrapper[4760]: I0121 16:04:48.989470 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 21 16:04:49 crc kubenswrapper[4760]: I0121 16:04:49.939060 4760 generic.go:334] "Generic (PLEG): container finished" podID="06b9d67d-1790-43ec-8009-91d0cd43e6da" containerID="7454443e39ed75706600273f4c7074ca9bd87ff352fb2d4323c9eb6e401331e1" exitCode=0 Jan 21 16:04:49 crc kubenswrapper[4760]: I0121 16:04:49.939175 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-cell1-server-0" event={"ID":"06b9d67d-1790-43ec-8009-91d0cd43e6da","Type":"ContainerDied","Data":"7454443e39ed75706600273f4c7074ca9bd87ff352fb2d4323c9eb6e401331e1"} Jan 21 16:04:49 crc kubenswrapper[4760]: E0121 16:04:49.941815 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="47448c69-3198-48d8-8623-9a339a934aca" Jan 21 16:04:49 crc kubenswrapper[4760]: E0121 16:04:49.943874 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="9ab8d081-832d-4e4c-92e6-94a97545613c" Jan 21 16:04:50 crc kubenswrapper[4760]: I0121 16:04:50.946662 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:04:50 crc kubenswrapper[4760]: I0121 16:04:50.946987 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:04:50 crc kubenswrapper[4760]: I0121 16:04:50.947035 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" Jan 21 16:04:50 crc kubenswrapper[4760]: I0121 
16:04:50.947765 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d46da82d10de2ad82e008f50494383d7547214baecf8965338b3787de8bae17f"} pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 16:04:50 crc kubenswrapper[4760]: I0121 16:04:50.947824 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" containerID="cri-o://d46da82d10de2ad82e008f50494383d7547214baecf8965338b3787de8bae17f" gracePeriod=600 Jan 21 16:04:50 crc kubenswrapper[4760]: I0121 16:04:50.950063 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"06b9d67d-1790-43ec-8009-91d0cd43e6da","Type":"ContainerStarted","Data":"6942ed082be8bf1424cd7eaa29502c9f4d5dda5f7ad1ce546eb080ece69798c2"} Jan 21 16:04:50 crc kubenswrapper[4760]: I0121 16:04:50.950724 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:04:50 crc kubenswrapper[4760]: E0121 16:04:50.950819 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="9ab8d081-832d-4e4c-92e6-94a97545613c" Jan 21 16:04:50 crc kubenswrapper[4760]: I0121 16:04:50.980209 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=40.945340346 podStartE2EDuration="1m13.980183807s" podCreationTimestamp="2026-01-21 16:03:37 +0000 UTC" firstStartedPulling="2026-01-21 
16:03:39.810520402 +0000 UTC m=+990.478289980" lastFinishedPulling="2026-01-21 16:04:12.845363863 +0000 UTC m=+1023.513133441" observedRunningTime="2026-01-21 16:04:50.974875163 +0000 UTC m=+1061.642644741" watchObservedRunningTime="2026-01-21 16:04:50.980183807 +0000 UTC m=+1061.647953385" Jan 21 16:04:51 crc kubenswrapper[4760]: I0121 16:04:51.243084 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ltr79" podUID="c17cd40e-6e7b-4c1e-9ca8-e6edc1248330" containerName="ovn-controller" probeResult="failure" output=< Jan 21 16:04:51 crc kubenswrapper[4760]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 21 16:04:51 crc kubenswrapper[4760]: > Jan 21 16:04:51 crc kubenswrapper[4760]: I0121 16:04:51.958845 4760 generic.go:334] "Generic (PLEG): container finished" podID="7d829f67-5ff7-4334-bb2d-2767a311159c" containerID="cd516e839da52f458c24ab1e3d7dea21bf258da58e9c477318047c2b9eee183f" exitCode=0 Jan 21 16:04:51 crc kubenswrapper[4760]: I0121 16:04:51.958935 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7d829f67-5ff7-4334-bb2d-2767a311159c","Type":"ContainerDied","Data":"cd516e839da52f458c24ab1e3d7dea21bf258da58e9c477318047c2b9eee183f"} Jan 21 16:04:51 crc kubenswrapper[4760]: I0121 16:04:51.965995 4760 generic.go:334] "Generic (PLEG): container finished" podID="5dd365e7-570c-4130-a299-30e376624ce2" containerID="d46da82d10de2ad82e008f50494383d7547214baecf8965338b3787de8bae17f" exitCode=0 Jan 21 16:04:51 crc kubenswrapper[4760]: I0121 16:04:51.966169 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerDied","Data":"d46da82d10de2ad82e008f50494383d7547214baecf8965338b3787de8bae17f"} Jan 21 16:04:51 crc kubenswrapper[4760]: I0121 16:04:51.966241 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerStarted","Data":"da9375843cc51770b9bc9917868815839c55dd5f95c6dfc9b5903ba3c26e61df"} Jan 21 16:04:51 crc kubenswrapper[4760]: I0121 16:04:51.966264 4760 scope.go:117] "RemoveContainer" containerID="4a4f642c0c2b59b10378fd5974f35f9fb23b198f62bb5a4dbe3d03ad54a3fd8b" Jan 21 16:04:53 crc kubenswrapper[4760]: I0121 16:04:53.095399 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7d829f67-5ff7-4334-bb2d-2767a311159c","Type":"ContainerStarted","Data":"89470c1652101ad07a87ab6b23b09d7df3ba2057edc70fbe168058f62b83e864"} Jan 21 16:04:54 crc kubenswrapper[4760]: I0121 16:04:54.105662 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 21 16:04:54 crc kubenswrapper[4760]: I0121 16:04:54.136167 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=44.020922166 podStartE2EDuration="1m17.136137059s" podCreationTimestamp="2026-01-21 16:03:37 +0000 UTC" firstStartedPulling="2026-01-21 16:03:40.260964505 +0000 UTC m=+990.928734083" lastFinishedPulling="2026-01-21 16:04:13.376179398 +0000 UTC m=+1024.043948976" observedRunningTime="2026-01-21 16:04:54.130949447 +0000 UTC m=+1064.798719035" watchObservedRunningTime="2026-01-21 16:04:54.136137059 +0000 UTC m=+1064.803906637" Jan 21 16:04:54 crc kubenswrapper[4760]: I0121 16:04:54.570554 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5bf47b49b7-xbl97" Jan 21 16:04:55 crc kubenswrapper[4760]: I0121 16:04:55.085639 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-vpn5h" Jan 21 16:04:55 crc kubenswrapper[4760]: I0121 16:04:55.113791 4760 generic.go:334] "Generic (PLEG): container finished" 
podID="29bd8985-5f22-46e9-9868-607bf9be273e" containerID="42cbba31963b815d7127acf22822c64b9840a54e6d1976333a3ae2d7a23592ae" exitCode=0 Jan 21 16:04:55 crc kubenswrapper[4760]: I0121 16:04:55.113888 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"29bd8985-5f22-46e9-9868-607bf9be273e","Type":"ContainerDied","Data":"42cbba31963b815d7127acf22822c64b9840a54e6d1976333a3ae2d7a23592ae"} Jan 21 16:04:55 crc kubenswrapper[4760]: I0121 16:04:55.168681 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-xbl97"] Jan 21 16:04:55 crc kubenswrapper[4760]: I0121 16:04:55.169018 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bf47b49b7-xbl97" podUID="da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23" containerName="dnsmasq-dns" containerID="cri-o://d6e1cbe9084fa3c4585b1cc835b11f2fe1a7fa534f3660bbc34d8279a5359508" gracePeriod=10 Jan 21 16:04:55 crc kubenswrapper[4760]: I0121 16:04:55.709545 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-xbl97" Jan 21 16:04:55 crc kubenswrapper[4760]: I0121 16:04:55.913300 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23-dns-svc\") pod \"da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23\" (UID: \"da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23\") " Jan 21 16:04:55 crc kubenswrapper[4760]: I0121 16:04:55.913449 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzdwz\" (UniqueName: \"kubernetes.io/projected/da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23-kube-api-access-pzdwz\") pod \"da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23\" (UID: \"da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23\") " Jan 21 16:04:55 crc kubenswrapper[4760]: I0121 16:04:55.913523 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23-ovsdbserver-nb\") pod \"da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23\" (UID: \"da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23\") " Jan 21 16:04:55 crc kubenswrapper[4760]: I0121 16:04:55.913613 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23-config\") pod \"da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23\" (UID: \"da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23\") " Jan 21 16:04:55 crc kubenswrapper[4760]: I0121 16:04:55.928840 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23-kube-api-access-pzdwz" (OuterVolumeSpecName: "kube-api-access-pzdwz") pod "da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23" (UID: "da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23"). InnerVolumeSpecName "kube-api-access-pzdwz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:04:55 crc kubenswrapper[4760]: I0121 16:04:55.959981 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23-config" (OuterVolumeSpecName: "config") pod "da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23" (UID: "da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:04:55 crc kubenswrapper[4760]: I0121 16:04:55.968156 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23" (UID: "da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:04:55 crc kubenswrapper[4760]: I0121 16:04:55.971548 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23" (UID: "da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:04:56 crc kubenswrapper[4760]: I0121 16:04:56.015713 4760 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:56 crc kubenswrapper[4760]: I0121 16:04:56.015754 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzdwz\" (UniqueName: \"kubernetes.io/projected/da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23-kube-api-access-pzdwz\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:56 crc kubenswrapper[4760]: I0121 16:04:56.015773 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:56 crc kubenswrapper[4760]: I0121 16:04:56.015791 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:04:56 crc kubenswrapper[4760]: I0121 16:04:56.123924 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"29bd8985-5f22-46e9-9868-607bf9be273e","Type":"ContainerStarted","Data":"8320fc70c198b576e0ab3e7096b613868a1ed1d1805ec26d93af4062646e7f7a"} Jan 21 16:04:56 crc kubenswrapper[4760]: I0121 16:04:56.125836 4760 generic.go:334] "Generic (PLEG): container finished" podID="d0612ab6-de5e-4f61-9e1c-97f8237c996c" containerID="0a5d3c9104c8eb1e808ed580c2980a8b30b9bc7876cbfd307d895089a165e7c3" exitCode=0 Jan 21 16:04:56 crc kubenswrapper[4760]: I0121 16:04:56.125889 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d0612ab6-de5e-4f61-9e1c-97f8237c996c","Type":"ContainerDied","Data":"0a5d3c9104c8eb1e808ed580c2980a8b30b9bc7876cbfd307d895089a165e7c3"} Jan 21 16:04:56 
crc kubenswrapper[4760]: I0121 16:04:56.128106 4760 generic.go:334] "Generic (PLEG): container finished" podID="da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23" containerID="d6e1cbe9084fa3c4585b1cc835b11f2fe1a7fa534f3660bbc34d8279a5359508" exitCode=0 Jan 21 16:04:56 crc kubenswrapper[4760]: I0121 16:04:56.128142 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-xbl97" event={"ID":"da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23","Type":"ContainerDied","Data":"d6e1cbe9084fa3c4585b1cc835b11f2fe1a7fa534f3660bbc34d8279a5359508"} Jan 21 16:04:56 crc kubenswrapper[4760]: I0121 16:04:56.128180 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-xbl97" event={"ID":"da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23","Type":"ContainerDied","Data":"938c73d1dc12b97bce56d26ad255edd53235dd723a1b39465bcba8108316f980"} Jan 21 16:04:56 crc kubenswrapper[4760]: I0121 16:04:56.128187 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-xbl97" Jan 21 16:04:56 crc kubenswrapper[4760]: I0121 16:04:56.128206 4760 scope.go:117] "RemoveContainer" containerID="d6e1cbe9084fa3c4585b1cc835b11f2fe1a7fa534f3660bbc34d8279a5359508" Jan 21 16:04:56 crc kubenswrapper[4760]: I0121 16:04:56.173075 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=11.603075936 podStartE2EDuration="1m17.173052381s" podCreationTimestamp="2026-01-21 16:03:39 +0000 UTC" firstStartedPulling="2026-01-21 16:03:41.179658473 +0000 UTC m=+991.847428051" lastFinishedPulling="2026-01-21 16:04:46.749634918 +0000 UTC m=+1057.417404496" observedRunningTime="2026-01-21 16:04:56.16330375 +0000 UTC m=+1066.831073338" watchObservedRunningTime="2026-01-21 16:04:56.173052381 +0000 UTC m=+1066.840821949" Jan 21 16:04:56 crc kubenswrapper[4760]: I0121 16:04:56.247801 4760 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/ovn-controller-ltr79" podUID="c17cd40e-6e7b-4c1e-9ca8-e6edc1248330" containerName="ovn-controller" probeResult="failure" output=<
Jan 21 16:04:56 crc kubenswrapper[4760]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Jan 21 16:04:56 crc kubenswrapper[4760]: >
Jan 21 16:04:56 crc kubenswrapper[4760]: I0121 16:04:56.264596 4760 scope.go:117] "RemoveContainer" containerID="618b23a439db0c92941f1772631b859d7f948a8e1341b668fbc17678ad724b5c"
Jan 21 16:04:56 crc kubenswrapper[4760]: I0121 16:04:56.291043 4760 scope.go:117] "RemoveContainer" containerID="d6e1cbe9084fa3c4585b1cc835b11f2fe1a7fa534f3660bbc34d8279a5359508"
Jan 21 16:04:56 crc kubenswrapper[4760]: E0121 16:04:56.292503 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6e1cbe9084fa3c4585b1cc835b11f2fe1a7fa534f3660bbc34d8279a5359508\": container with ID starting with d6e1cbe9084fa3c4585b1cc835b11f2fe1a7fa534f3660bbc34d8279a5359508 not found: ID does not exist" containerID="d6e1cbe9084fa3c4585b1cc835b11f2fe1a7fa534f3660bbc34d8279a5359508"
Jan 21 16:04:56 crc kubenswrapper[4760]: I0121 16:04:56.292551 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6e1cbe9084fa3c4585b1cc835b11f2fe1a7fa534f3660bbc34d8279a5359508"} err="failed to get container status \"d6e1cbe9084fa3c4585b1cc835b11f2fe1a7fa534f3660bbc34d8279a5359508\": rpc error: code = NotFound desc = could not find container \"d6e1cbe9084fa3c4585b1cc835b11f2fe1a7fa534f3660bbc34d8279a5359508\": container with ID starting with d6e1cbe9084fa3c4585b1cc835b11f2fe1a7fa534f3660bbc34d8279a5359508 not found: ID does not exist"
Jan 21 16:04:56 crc kubenswrapper[4760]: I0121 16:04:56.292581 4760 scope.go:117] "RemoveContainer" containerID="618b23a439db0c92941f1772631b859d7f948a8e1341b668fbc17678ad724b5c"
Jan 21 16:04:56 crc kubenswrapper[4760]: E0121 16:04:56.293164 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"618b23a439db0c92941f1772631b859d7f948a8e1341b668fbc17678ad724b5c\": container with ID starting with 618b23a439db0c92941f1772631b859d7f948a8e1341b668fbc17678ad724b5c not found: ID does not exist" containerID="618b23a439db0c92941f1772631b859d7f948a8e1341b668fbc17678ad724b5c"
Jan 21 16:04:56 crc kubenswrapper[4760]: I0121 16:04:56.293190 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"618b23a439db0c92941f1772631b859d7f948a8e1341b668fbc17678ad724b5c"} err="failed to get container status \"618b23a439db0c92941f1772631b859d7f948a8e1341b668fbc17678ad724b5c\": rpc error: code = NotFound desc = could not find container \"618b23a439db0c92941f1772631b859d7f948a8e1341b668fbc17678ad724b5c\": container with ID starting with 618b23a439db0c92941f1772631b859d7f948a8e1341b668fbc17678ad724b5c not found: ID does not exist"
Jan 21 16:04:56 crc kubenswrapper[4760]: I0121 16:04:56.295361 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-xbl97"]
Jan 21 16:04:56 crc kubenswrapper[4760]: I0121 16:04:56.302219 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-xbl97"]
Jan 21 16:04:56 crc kubenswrapper[4760]: I0121 16:04:56.977788 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0"
Jan 21 16:04:57 crc kubenswrapper[4760]: I0121 16:04:57.141188 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d0612ab6-de5e-4f61-9e1c-97f8237c996c","Type":"ContainerStarted","Data":"54ab330e9875ed8b29cc5c8bc5d90ec4f48a3149a52fc14051be86c78dd4c549"}
Jan 21 16:04:57 crc kubenswrapper[4760]: I0121 16:04:57.166851 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=13.803939333 podStartE2EDuration="1m17.166831484s" podCreationTimestamp="2026-01-21 16:03:40 +0000 UTC" firstStartedPulling="2026-01-21 16:03:43.405999235 +0000 UTC m=+994.073768813" lastFinishedPulling="2026-01-21 16:04:46.768891386 +0000 UTC m=+1057.436660964" observedRunningTime="2026-01-21 16:04:57.165586689 +0000 UTC m=+1067.833356287" watchObservedRunningTime="2026-01-21 16:04:57.166831484 +0000 UTC m=+1067.834601062"
Jan 21 16:04:57 crc kubenswrapper[4760]: I0121 16:04:57.634279 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23" path="/var/lib/kubelet/pods/da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23/volumes"
Jan 21 16:05:00 crc kubenswrapper[4760]: I0121 16:05:00.177111 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-sz9bq" event={"ID":"0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc","Type":"ContainerStarted","Data":"20d4228d2adc73a688cfac073fbe0d52cffa4514d25db1a190729af6a302e4d3"}
Jan 21 16:05:00 crc kubenswrapper[4760]: I0121 16:05:00.200815 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-sz9bq" podStartSLOduration=3.058886599 podStartE2EDuration="37.200792272s" podCreationTimestamp="2026-01-21 16:04:23 +0000 UTC" firstStartedPulling="2026-01-21 16:04:24.921231646 +0000 UTC m=+1035.589001224" lastFinishedPulling="2026-01-21 16:04:59.063137319 +0000 UTC m=+1069.730906897" observedRunningTime="2026-01-21 16:05:00.194679513 +0000 UTC m=+1070.862449091" watchObservedRunningTime="2026-01-21 16:05:00.200792272 +0000 UTC m=+1070.868561850"
Jan 21 16:05:00 crc kubenswrapper[4760]: I0121 16:05:00.588020 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Jan 21 16:05:00 crc kubenswrapper[4760]: I0121 16:05:00.588603 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Jan 21 16:05:00 crc kubenswrapper[4760]: I0121 16:05:00.671503 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.250372 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ltr79" podUID="c17cd40e-6e7b-4c1e-9ca8-e6edc1248330" containerName="ovn-controller" probeResult="failure" output=<
Jan 21 16:05:01 crc kubenswrapper[4760]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Jan 21 16:05:01 crc kubenswrapper[4760]: >
Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.253248 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.666247 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-995cl"]
Jan 21 16:05:01 crc kubenswrapper[4760]: E0121 16:05:01.666751 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b88b3abe-b642-4d65-b822-5b62d6095959" containerName="extract-content"
Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.666775 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="b88b3abe-b642-4d65-b822-5b62d6095959" containerName="extract-content"
Jan 21 16:05:01 crc kubenswrapper[4760]: E0121 16:05:01.666869 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23" containerName="dnsmasq-dns"
Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.666879 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23" containerName="dnsmasq-dns"
Jan 21 16:05:01 crc kubenswrapper[4760]: E0121 16:05:01.666902 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b88b3abe-b642-4d65-b822-5b62d6095959" containerName="registry-server"
Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.666911 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="b88b3abe-b642-4d65-b822-5b62d6095959" containerName="registry-server"
Jan 21 16:05:01 crc kubenswrapper[4760]: E0121 16:05:01.666930 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b88b3abe-b642-4d65-b822-5b62d6095959" containerName="extract-utilities"
Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.666938 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="b88b3abe-b642-4d65-b822-5b62d6095959" containerName="extract-utilities"
Jan 21 16:05:01 crc kubenswrapper[4760]: E0121 16:05:01.666951 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23" containerName="init"
Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.666960 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23" containerName="init"
Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.667179 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="da16264e-f8ca-4b1a-bf0e-6b59fbf0ec23" containerName="dnsmasq-dns"
Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.667201 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="b88b3abe-b642-4d65-b822-5b62d6095959" containerName="registry-server"
Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.668036 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-995cl"
Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.680926 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7726-account-create-update-jdlpj"]
Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.682472 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7726-account-create-update-jdlpj"
Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.684663 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.691022 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-995cl"]
Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.696510 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7726-account-create-update-jdlpj"]
Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.821565 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g222\" (UniqueName: \"kubernetes.io/projected/0df56532-7a5e-43a1-88cd-2d55f731b0f1-kube-api-access-4g222\") pod \"keystone-db-create-995cl\" (UID: \"0df56532-7a5e-43a1-88cd-2d55f731b0f1\") " pod="openstack/keystone-db-create-995cl"
Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.822168 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0df56532-7a5e-43a1-88cd-2d55f731b0f1-operator-scripts\") pod \"keystone-db-create-995cl\" (UID: \"0df56532-7a5e-43a1-88cd-2d55f731b0f1\") " pod="openstack/keystone-db-create-995cl"
Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.822211 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqjzn\" (UniqueName: \"kubernetes.io/projected/3de43463-27f1-4fbe-959a-6c6446414177-kube-api-access-pqjzn\") pod \"keystone-7726-account-create-update-jdlpj\" (UID: \"3de43463-27f1-4fbe-959a-6c6446414177\") " pod="openstack/keystone-7726-account-create-update-jdlpj"
Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.822269 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3de43463-27f1-4fbe-959a-6c6446414177-operator-scripts\") pod \"keystone-7726-account-create-update-jdlpj\" (UID: \"3de43463-27f1-4fbe-959a-6c6446414177\") " pod="openstack/keystone-7726-account-create-update-jdlpj"
Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.846783 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-chqpt"]
Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.847886 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-chqpt"
Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.854303 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-chqpt"]
Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.923548 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2g5r\" (UniqueName: \"kubernetes.io/projected/956c0478-0da7-419e-b003-65e479971040-kube-api-access-p2g5r\") pod \"placement-db-create-chqpt\" (UID: \"956c0478-0da7-419e-b003-65e479971040\") " pod="openstack/placement-db-create-chqpt"
Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.923625 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4g222\" (UniqueName: \"kubernetes.io/projected/0df56532-7a5e-43a1-88cd-2d55f731b0f1-kube-api-access-4g222\") pod \"keystone-db-create-995cl\" (UID: \"0df56532-7a5e-43a1-88cd-2d55f731b0f1\") " pod="openstack/keystone-db-create-995cl"
Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.923687 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/956c0478-0da7-419e-b003-65e479971040-operator-scripts\") pod \"placement-db-create-chqpt\" (UID: \"956c0478-0da7-419e-b003-65e479971040\") " pod="openstack/placement-db-create-chqpt"
Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.923729 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0df56532-7a5e-43a1-88cd-2d55f731b0f1-operator-scripts\") pod \"keystone-db-create-995cl\" (UID: \"0df56532-7a5e-43a1-88cd-2d55f731b0f1\") " pod="openstack/keystone-db-create-995cl"
Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.923756 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqjzn\" (UniqueName: \"kubernetes.io/projected/3de43463-27f1-4fbe-959a-6c6446414177-kube-api-access-pqjzn\") pod \"keystone-7726-account-create-update-jdlpj\" (UID: \"3de43463-27f1-4fbe-959a-6c6446414177\") " pod="openstack/keystone-7726-account-create-update-jdlpj"
Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.923795 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3de43463-27f1-4fbe-959a-6c6446414177-operator-scripts\") pod \"keystone-7726-account-create-update-jdlpj\" (UID: \"3de43463-27f1-4fbe-959a-6c6446414177\") " pod="openstack/keystone-7726-account-create-update-jdlpj"
Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.924893 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3de43463-27f1-4fbe-959a-6c6446414177-operator-scripts\") pod \"keystone-7726-account-create-update-jdlpj\" (UID: \"3de43463-27f1-4fbe-959a-6c6446414177\") " pod="openstack/keystone-7726-account-create-update-jdlpj"
Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.925114 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0df56532-7a5e-43a1-88cd-2d55f731b0f1-operator-scripts\") pod \"keystone-db-create-995cl\" (UID: \"0df56532-7a5e-43a1-88cd-2d55f731b0f1\") " pod="openstack/keystone-db-create-995cl"
Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.944779 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4g222\" (UniqueName: \"kubernetes.io/projected/0df56532-7a5e-43a1-88cd-2d55f731b0f1-kube-api-access-4g222\") pod \"keystone-db-create-995cl\" (UID: \"0df56532-7a5e-43a1-88cd-2d55f731b0f1\") " pod="openstack/keystone-db-create-995cl"
Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.950350 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqjzn\" (UniqueName: \"kubernetes.io/projected/3de43463-27f1-4fbe-959a-6c6446414177-kube-api-access-pqjzn\") pod \"keystone-7726-account-create-update-jdlpj\" (UID: \"3de43463-27f1-4fbe-959a-6c6446414177\") " pod="openstack/keystone-7726-account-create-update-jdlpj"
Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.982025 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-f76c-account-create-update-9wpkx"]
Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.984429 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-f76c-account-create-update-9wpkx"
Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.987716 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Jan 21 16:05:01 crc kubenswrapper[4760]: I0121 16:05:01.995692 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-f76c-account-create-update-9wpkx"]
Jan 21 16:05:02 crc kubenswrapper[4760]: I0121 16:05:02.015390 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Jan 21 16:05:02 crc kubenswrapper[4760]: I0121 16:05:02.015446 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Jan 21 16:05:02 crc kubenswrapper[4760]: I0121 16:05:02.025246 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2g5r\" (UniqueName: \"kubernetes.io/projected/956c0478-0da7-419e-b003-65e479971040-kube-api-access-p2g5r\") pod \"placement-db-create-chqpt\" (UID: \"956c0478-0da7-419e-b003-65e479971040\") " pod="openstack/placement-db-create-chqpt"
Jan 21 16:05:02 crc kubenswrapper[4760]: I0121 16:05:02.025522 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/956c0478-0da7-419e-b003-65e479971040-operator-scripts\") pod \"placement-db-create-chqpt\" (UID: \"956c0478-0da7-419e-b003-65e479971040\") " pod="openstack/placement-db-create-chqpt"
Jan 21 16:05:02 crc kubenswrapper[4760]: I0121 16:05:02.026940 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/956c0478-0da7-419e-b003-65e479971040-operator-scripts\") pod \"placement-db-create-chqpt\" (UID: \"956c0478-0da7-419e-b003-65e479971040\") " pod="openstack/placement-db-create-chqpt"
Jan 21 16:05:02 crc kubenswrapper[4760]: I0121 16:05:02.044065 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2g5r\" (UniqueName: \"kubernetes.io/projected/956c0478-0da7-419e-b003-65e479971040-kube-api-access-p2g5r\") pod \"placement-db-create-chqpt\" (UID: \"956c0478-0da7-419e-b003-65e479971040\") " pod="openstack/placement-db-create-chqpt"
Jan 21 16:05:02 crc kubenswrapper[4760]: I0121 16:05:02.090223 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-995cl"
Jan 21 16:05:02 crc kubenswrapper[4760]: I0121 16:05:02.115204 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7726-account-create-update-jdlpj"
Jan 21 16:05:02 crc kubenswrapper[4760]: I0121 16:05:02.126949 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6scl9\" (UniqueName: \"kubernetes.io/projected/1213619b-eee7-4221-9083-06362fc707f5-kube-api-access-6scl9\") pod \"placement-f76c-account-create-update-9wpkx\" (UID: \"1213619b-eee7-4221-9083-06362fc707f5\") " pod="openstack/placement-f76c-account-create-update-9wpkx"
Jan 21 16:05:02 crc kubenswrapper[4760]: I0121 16:05:02.127090 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1213619b-eee7-4221-9083-06362fc707f5-operator-scripts\") pod \"placement-f76c-account-create-update-9wpkx\" (UID: \"1213619b-eee7-4221-9083-06362fc707f5\") " pod="openstack/placement-f76c-account-create-update-9wpkx"
Jan 21 16:05:02 crc kubenswrapper[4760]: I0121 16:05:02.171173 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-chqpt"
Jan 21 16:05:02 crc kubenswrapper[4760]: I0121 16:05:02.228615 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6scl9\" (UniqueName: \"kubernetes.io/projected/1213619b-eee7-4221-9083-06362fc707f5-kube-api-access-6scl9\") pod \"placement-f76c-account-create-update-9wpkx\" (UID: \"1213619b-eee7-4221-9083-06362fc707f5\") " pod="openstack/placement-f76c-account-create-update-9wpkx"
Jan 21 16:05:02 crc kubenswrapper[4760]: I0121 16:05:02.228743 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1213619b-eee7-4221-9083-06362fc707f5-operator-scripts\") pod \"placement-f76c-account-create-update-9wpkx\" (UID: \"1213619b-eee7-4221-9083-06362fc707f5\") " pod="openstack/placement-f76c-account-create-update-9wpkx"
Jan 21 16:05:02 crc kubenswrapper[4760]: I0121 16:05:02.229708 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1213619b-eee7-4221-9083-06362fc707f5-operator-scripts\") pod \"placement-f76c-account-create-update-9wpkx\" (UID: \"1213619b-eee7-4221-9083-06362fc707f5\") " pod="openstack/placement-f76c-account-create-update-9wpkx"
Jan 21 16:05:02 crc kubenswrapper[4760]: I0121 16:05:02.270095 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6scl9\" (UniqueName: \"kubernetes.io/projected/1213619b-eee7-4221-9083-06362fc707f5-kube-api-access-6scl9\") pod \"placement-f76c-account-create-update-9wpkx\" (UID: \"1213619b-eee7-4221-9083-06362fc707f5\") " pod="openstack/placement-f76c-account-create-update-9wpkx"
Jan 21 16:05:02 crc kubenswrapper[4760]: I0121 16:05:02.326718 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-f76c-account-create-update-9wpkx"
Jan 21 16:05:02 crc kubenswrapper[4760]: I0121 16:05:02.529991 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-chqpt"]
Jan 21 16:05:02 crc kubenswrapper[4760]: I0121 16:05:02.622389 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7726-account-create-update-jdlpj"]
Jan 21 16:05:02 crc kubenswrapper[4760]: I0121 16:05:02.636662 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-995cl"]
Jan 21 16:05:02 crc kubenswrapper[4760]: W0121 16:05:02.638072 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3de43463_27f1_4fbe_959a_6c6446414177.slice/crio-9d1bae5e4454fe16a467bc924c1eb9fe51822476f8fbdb108196f8b68c89887f WatchSource:0}: Error finding container 9d1bae5e4454fe16a467bc924c1eb9fe51822476f8fbdb108196f8b68c89887f: Status 404 returned error can't find the container with id 9d1bae5e4454fe16a467bc924c1eb9fe51822476f8fbdb108196f8b68c89887f
Jan 21 16:05:02 crc kubenswrapper[4760]: I0121 16:05:02.646700 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-f76c-account-create-update-9wpkx"]
Jan 21 16:05:03 crc kubenswrapper[4760]: I0121 16:05:03.223116 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-chqpt" event={"ID":"956c0478-0da7-419e-b003-65e479971040","Type":"ContainerStarted","Data":"dcfa2b2f0f9e697f31b5c501f7104f006aeaa37c3e5ebe4b721fa5414a7ea15d"}
Jan 21 16:05:03 crc kubenswrapper[4760]: I0121 16:05:03.224393 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7726-account-create-update-jdlpj" event={"ID":"3de43463-27f1-4fbe-959a-6c6446414177","Type":"ContainerStarted","Data":"9d1bae5e4454fe16a467bc924c1eb9fe51822476f8fbdb108196f8b68c89887f"}
Jan 21 16:05:03 crc kubenswrapper[4760]: I0121 16:05:03.226581 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"9ab8d081-832d-4e4c-92e6-94a97545613c","Type":"ContainerStarted","Data":"a523f91d1215ad2b3dbb2078e4caaa79f246a6c065434dcc2c645c7711f6bb95"}
Jan 21 16:05:03 crc kubenswrapper[4760]: I0121 16:05:03.228116 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-995cl" event={"ID":"0df56532-7a5e-43a1-88cd-2d55f731b0f1","Type":"ContainerStarted","Data":"a87e3f525b4e1f6813d35c6be9149ed1e28b3db8b0c88c0dbd185c21db488892"}
Jan 21 16:05:03 crc kubenswrapper[4760]: I0121 16:05:03.230440 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f76c-account-create-update-9wpkx" event={"ID":"1213619b-eee7-4221-9083-06362fc707f5","Type":"ContainerStarted","Data":"7d936c7e1f63a5b1fbcb03b390c6a84eacf0742287e4a6f22ae20db57e697726"}
Jan 21 16:05:03 crc kubenswrapper[4760]: I0121 16:05:03.251460 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=67.621292701 podStartE2EDuration="1m13.251433649s" podCreationTimestamp="2026-01-21 16:03:50 +0000 UTC" firstStartedPulling="2026-01-21 16:04:14.521833051 +0000 UTC m=+1025.189602629" lastFinishedPulling="2026-01-21 16:04:20.151973999 +0000 UTC m=+1030.819743577" observedRunningTime="2026-01-21 16:05:03.251226114 +0000 UTC m=+1073.918995732" watchObservedRunningTime="2026-01-21 16:05:03.251433649 +0000 UTC m=+1073.919203217"
Jan 21 16:05:03 crc kubenswrapper[4760]: I0121 16:05:03.875853 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-gmwwz"]
Jan 21 16:05:03 crc kubenswrapper[4760]: I0121 16:05:03.877251 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-gmwwz"
Jan 21 16:05:03 crc kubenswrapper[4760]: I0121 16:05:03.939136 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-gmwwz"]
Jan 21 16:05:03 crc kubenswrapper[4760]: I0121 16:05:03.965550 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15134486-2d84-4c09-9a92-4df82dfcf01a-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-gmwwz\" (UID: \"15134486-2d84-4c09-9a92-4df82dfcf01a\") " pod="openstack/dnsmasq-dns-b8fbc5445-gmwwz"
Jan 21 16:05:03 crc kubenswrapper[4760]: I0121 16:05:03.965634 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/15134486-2d84-4c09-9a92-4df82dfcf01a-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-gmwwz\" (UID: \"15134486-2d84-4c09-9a92-4df82dfcf01a\") " pod="openstack/dnsmasq-dns-b8fbc5445-gmwwz"
Jan 21 16:05:03 crc kubenswrapper[4760]: I0121 16:05:03.965690 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15134486-2d84-4c09-9a92-4df82dfcf01a-config\") pod \"dnsmasq-dns-b8fbc5445-gmwwz\" (UID: \"15134486-2d84-4c09-9a92-4df82dfcf01a\") " pod="openstack/dnsmasq-dns-b8fbc5445-gmwwz"
Jan 21 16:05:03 crc kubenswrapper[4760]: I0121 16:05:03.965947 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/15134486-2d84-4c09-9a92-4df82dfcf01a-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-gmwwz\" (UID: \"15134486-2d84-4c09-9a92-4df82dfcf01a\") " pod="openstack/dnsmasq-dns-b8fbc5445-gmwwz"
Jan 21 16:05:03 crc kubenswrapper[4760]: I0121 16:05:03.966089 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bn24x\" (UniqueName: \"kubernetes.io/projected/15134486-2d84-4c09-9a92-4df82dfcf01a-kube-api-access-bn24x\") pod \"dnsmasq-dns-b8fbc5445-gmwwz\" (UID: \"15134486-2d84-4c09-9a92-4df82dfcf01a\") " pod="openstack/dnsmasq-dns-b8fbc5445-gmwwz"
Jan 21 16:05:04 crc kubenswrapper[4760]: I0121 16:05:04.071455 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15134486-2d84-4c09-9a92-4df82dfcf01a-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-gmwwz\" (UID: \"15134486-2d84-4c09-9a92-4df82dfcf01a\") " pod="openstack/dnsmasq-dns-b8fbc5445-gmwwz"
Jan 21 16:05:04 crc kubenswrapper[4760]: I0121 16:05:04.071565 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/15134486-2d84-4c09-9a92-4df82dfcf01a-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-gmwwz\" (UID: \"15134486-2d84-4c09-9a92-4df82dfcf01a\") " pod="openstack/dnsmasq-dns-b8fbc5445-gmwwz"
Jan 21 16:05:04 crc kubenswrapper[4760]: I0121 16:05:04.071594 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15134486-2d84-4c09-9a92-4df82dfcf01a-config\") pod \"dnsmasq-dns-b8fbc5445-gmwwz\" (UID: \"15134486-2d84-4c09-9a92-4df82dfcf01a\") " pod="openstack/dnsmasq-dns-b8fbc5445-gmwwz"
Jan 21 16:05:04 crc kubenswrapper[4760]: I0121 16:05:04.071644 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/15134486-2d84-4c09-9a92-4df82dfcf01a-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-gmwwz\" (UID: \"15134486-2d84-4c09-9a92-4df82dfcf01a\") " pod="openstack/dnsmasq-dns-b8fbc5445-gmwwz"
Jan 21 16:05:04 crc kubenswrapper[4760]: I0121 16:05:04.071694 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bn24x\" (UniqueName: \"kubernetes.io/projected/15134486-2d84-4c09-9a92-4df82dfcf01a-kube-api-access-bn24x\") pod \"dnsmasq-dns-b8fbc5445-gmwwz\" (UID: \"15134486-2d84-4c09-9a92-4df82dfcf01a\") " pod="openstack/dnsmasq-dns-b8fbc5445-gmwwz"
Jan 21 16:05:04 crc kubenswrapper[4760]: I0121 16:05:04.073197 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15134486-2d84-4c09-9a92-4df82dfcf01a-config\") pod \"dnsmasq-dns-b8fbc5445-gmwwz\" (UID: \"15134486-2d84-4c09-9a92-4df82dfcf01a\") " pod="openstack/dnsmasq-dns-b8fbc5445-gmwwz"
Jan 21 16:05:04 crc kubenswrapper[4760]: I0121 16:05:04.073470 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/15134486-2d84-4c09-9a92-4df82dfcf01a-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-gmwwz\" (UID: \"15134486-2d84-4c09-9a92-4df82dfcf01a\") " pod="openstack/dnsmasq-dns-b8fbc5445-gmwwz"
Jan 21 16:05:04 crc kubenswrapper[4760]: I0121 16:05:04.073477 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15134486-2d84-4c09-9a92-4df82dfcf01a-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-gmwwz\" (UID: \"15134486-2d84-4c09-9a92-4df82dfcf01a\") " pod="openstack/dnsmasq-dns-b8fbc5445-gmwwz"
Jan 21 16:05:04 crc kubenswrapper[4760]: I0121 16:05:04.074022 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/15134486-2d84-4c09-9a92-4df82dfcf01a-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-gmwwz\" (UID: \"15134486-2d84-4c09-9a92-4df82dfcf01a\") " pod="openstack/dnsmasq-dns-b8fbc5445-gmwwz"
Jan 21 16:05:04 crc kubenswrapper[4760]: I0121 16:05:04.102852 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bn24x\" (UniqueName: \"kubernetes.io/projected/15134486-2d84-4c09-9a92-4df82dfcf01a-kube-api-access-bn24x\") pod \"dnsmasq-dns-b8fbc5445-gmwwz\" (UID: \"15134486-2d84-4c09-9a92-4df82dfcf01a\") " pod="openstack/dnsmasq-dns-b8fbc5445-gmwwz"
Jan 21 16:05:04 crc kubenswrapper[4760]: I0121 16:05:04.228783 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-gmwwz"
Jan 21 16:05:04 crc kubenswrapper[4760]: I0121 16:05:04.243811 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-chqpt" event={"ID":"956c0478-0da7-419e-b003-65e479971040","Type":"ContainerStarted","Data":"10fe11ee0330d53dd2513eb5aab1ba95be58078705bc92fe3db3946600df96c8"}
Jan 21 16:05:04 crc kubenswrapper[4760]: I0121 16:05:04.247412 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7726-account-create-update-jdlpj" event={"ID":"3de43463-27f1-4fbe-959a-6c6446414177","Type":"ContainerStarted","Data":"0e6d219ee4178496a68d572f97cdfac7435b2070269de1ee0c2609d9a8855f3a"}
Jan 21 16:05:04 crc kubenswrapper[4760]: I0121 16:05:04.249601 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f76c-account-create-update-9wpkx" event={"ID":"1213619b-eee7-4221-9083-06362fc707f5","Type":"ContainerStarted","Data":"f514b862ce6febf31745a4600f53c62282c7ae1396e230bccd313518c93d17f3"}
Jan 21 16:05:04 crc kubenswrapper[4760]: I0121 16:05:04.252179 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-995cl" event={"ID":"0df56532-7a5e-43a1-88cd-2d55f731b0f1","Type":"ContainerStarted","Data":"b5fa9c7a45fe6e80d225cf15affa00928bbc3595be19eaab232935c968758bd4"}
Jan 21 16:05:04 crc kubenswrapper[4760]: I0121 16:05:04.276678 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-chqpt" podStartSLOduration=3.276652588 podStartE2EDuration="3.276652588s" podCreationTimestamp="2026-01-21 16:05:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:05:04.268342675 +0000 UTC m=+1074.936112253" watchObservedRunningTime="2026-01-21 16:05:04.276652588 +0000 UTC m=+1074.944422166"
Jan 21 16:05:04 crc kubenswrapper[4760]: I0121 16:05:04.317287 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-995cl" podStartSLOduration=3.317258014 podStartE2EDuration="3.317258014s" podCreationTimestamp="2026-01-21 16:05:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:05:04.308978672 +0000 UTC m=+1074.976748300" watchObservedRunningTime="2026-01-21 16:05:04.317258014 +0000 UTC m=+1074.985027592"
Jan 21 16:05:04 crc kubenswrapper[4760]: I0121 16:05:04.332815 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-f76c-account-create-update-9wpkx" podStartSLOduration=3.332786229 podStartE2EDuration="3.332786229s" podCreationTimestamp="2026-01-21 16:05:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:05:04.325536877 +0000 UTC m=+1074.993306455" watchObservedRunningTime="2026-01-21 16:05:04.332786229 +0000 UTC m=+1075.000555807"
Jan 21 16:05:04 crc kubenswrapper[4760]: I0121 16:05:04.350448 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7726-account-create-update-jdlpj" podStartSLOduration=3.350424235 podStartE2EDuration="3.350424235s" podCreationTimestamp="2026-01-21 16:05:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:05:04.34561039 +0000 UTC m=+1075.013379968" watchObservedRunningTime="2026-01-21 16:05:04.350424235 +0000 UTC m=+1075.018193813"
Jan 21 16:05:04 crc kubenswrapper[4760]: I0121 16:05:04.547386 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-gmwwz"]
Jan 21 16:05:04 crc kubenswrapper[4760]: W0121 16:05:04.554630 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15134486_2d84_4c09_9a92_4df82dfcf01a.slice/crio-20792d9d50cdd66cf195616cc85024c48427416914eaf3fb5d1c437e9ab718b7 WatchSource:0}: Error finding container 20792d9d50cdd66cf195616cc85024c48427416914eaf3fb5d1c437e9ab718b7: Status 404 returned error can't find the container with id 20792d9d50cdd66cf195616cc85024c48427416914eaf3fb5d1c437e9ab718b7
Jan 21 16:05:04 crc kubenswrapper[4760]: I0121 16:05:04.997979 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"]
Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.006407 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.009419 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf"
Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.009453 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data"
Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.009689 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files"
Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.010665 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-rl8ml"
Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.018678 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.020040 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.160313 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.200113 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"d1ccc2ed-d1e8-4b84-807d-55d70e8def12\") " pod="openstack/swift-storage-0"
Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.200204 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/d1ccc2ed-d1e8-4b84-807d-55d70e8def12-cache\") pod \"swift-storage-0\" (UID: \"d1ccc2ed-d1e8-4b84-807d-55d70e8def12\") " pod="openstack/swift-storage-0"
Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.200309 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d1ccc2ed-d1e8-4b84-807d-55d70e8def12-etc-swift\") pod \"swift-storage-0\" (UID: \"d1ccc2ed-d1e8-4b84-807d-55d70e8def12\") " pod="openstack/swift-storage-0"
Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.200424 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/d1ccc2ed-d1e8-4b84-807d-55d70e8def12-lock\") pod \"swift-storage-0\" (UID: \"d1ccc2ed-d1e8-4b84-807d-55d70e8def12\") " pod="openstack/swift-storage-0"
Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.200454 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pvfc\" (UniqueName: \"kubernetes.io/projected/d1ccc2ed-d1e8-4b84-807d-55d70e8def12-kube-api-access-4pvfc\") pod \"swift-storage-0\" (UID: \"d1ccc2ed-d1e8-4b84-807d-55d70e8def12\") " pod="openstack/swift-storage-0"
Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 
16:05:05.269480 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"47448c69-3198-48d8-8623-9a339a934aca","Type":"ContainerStarted","Data":"6d6b3b6c1cc71ee3f31215ae18f2044be08fb9576bdef34aa2657a11cdc49d71"} Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.272583 4760 generic.go:334] "Generic (PLEG): container finished" podID="956c0478-0da7-419e-b003-65e479971040" containerID="10fe11ee0330d53dd2513eb5aab1ba95be58078705bc92fe3db3946600df96c8" exitCode=0 Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.272654 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-chqpt" event={"ID":"956c0478-0da7-419e-b003-65e479971040","Type":"ContainerDied","Data":"10fe11ee0330d53dd2513eb5aab1ba95be58078705bc92fe3db3946600df96c8"} Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.274274 4760 generic.go:334] "Generic (PLEG): container finished" podID="15134486-2d84-4c09-9a92-4df82dfcf01a" containerID="433441af784888a0e207259e7f17ab26778cc42126d4f3c1404bb39f079669b2" exitCode=0 Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.274497 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-gmwwz" event={"ID":"15134486-2d84-4c09-9a92-4df82dfcf01a","Type":"ContainerDied","Data":"433441af784888a0e207259e7f17ab26778cc42126d4f3c1404bb39f079669b2"} Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.274582 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-gmwwz" event={"ID":"15134486-2d84-4c09-9a92-4df82dfcf01a","Type":"ContainerStarted","Data":"20792d9d50cdd66cf195616cc85024c48427416914eaf3fb5d1c437e9ab718b7"} Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.295332 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=70.965440145 podStartE2EDuration="1m16.295299468s" podCreationTimestamp="2026-01-21 16:03:49 +0000 UTC" 
firstStartedPulling="2026-01-21 16:04:14.819898084 +0000 UTC m=+1025.487667662" lastFinishedPulling="2026-01-21 16:04:20.149757407 +0000 UTC m=+1030.817526985" observedRunningTime="2026-01-21 16:05:05.29230872 +0000 UTC m=+1075.960078318" watchObservedRunningTime="2026-01-21 16:05:05.295299468 +0000 UTC m=+1075.963069046" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.301645 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/d1ccc2ed-d1e8-4b84-807d-55d70e8def12-cache\") pod \"swift-storage-0\" (UID: \"d1ccc2ed-d1e8-4b84-807d-55d70e8def12\") " pod="openstack/swift-storage-0" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.301758 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d1ccc2ed-d1e8-4b84-807d-55d70e8def12-etc-swift\") pod \"swift-storage-0\" (UID: \"d1ccc2ed-d1e8-4b84-807d-55d70e8def12\") " pod="openstack/swift-storage-0" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.301811 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/d1ccc2ed-d1e8-4b84-807d-55d70e8def12-lock\") pod \"swift-storage-0\" (UID: \"d1ccc2ed-d1e8-4b84-807d-55d70e8def12\") " pod="openstack/swift-storage-0" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.301839 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pvfc\" (UniqueName: \"kubernetes.io/projected/d1ccc2ed-d1e8-4b84-807d-55d70e8def12-kube-api-access-4pvfc\") pod \"swift-storage-0\" (UID: \"d1ccc2ed-d1e8-4b84-807d-55d70e8def12\") " pod="openstack/swift-storage-0" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.301918 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" 
(UID: \"d1ccc2ed-d1e8-4b84-807d-55d70e8def12\") " pod="openstack/swift-storage-0" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.302486 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"d1ccc2ed-d1e8-4b84-807d-55d70e8def12\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/swift-storage-0" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.302716 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/d1ccc2ed-d1e8-4b84-807d-55d70e8def12-cache\") pod \"swift-storage-0\" (UID: \"d1ccc2ed-d1e8-4b84-807d-55d70e8def12\") " pod="openstack/swift-storage-0" Jan 21 16:05:05 crc kubenswrapper[4760]: E0121 16:05:05.302805 4760 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 21 16:05:05 crc kubenswrapper[4760]: E0121 16:05:05.302832 4760 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 21 16:05:05 crc kubenswrapper[4760]: E0121 16:05:05.302904 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d1ccc2ed-d1e8-4b84-807d-55d70e8def12-etc-swift podName:d1ccc2ed-d1e8-4b84-807d-55d70e8def12 nodeName:}" failed. No retries permitted until 2026-01-21 16:05:05.802876127 +0000 UTC m=+1076.470645875 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d1ccc2ed-d1e8-4b84-807d-55d70e8def12-etc-swift") pod "swift-storage-0" (UID: "d1ccc2ed-d1e8-4b84-807d-55d70e8def12") : configmap "swift-ring-files" not found Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.302953 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/d1ccc2ed-d1e8-4b84-807d-55d70e8def12-lock\") pod \"swift-storage-0\" (UID: \"d1ccc2ed-d1e8-4b84-807d-55d70e8def12\") " pod="openstack/swift-storage-0" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.336635 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pvfc\" (UniqueName: \"kubernetes.io/projected/d1ccc2ed-d1e8-4b84-807d-55d70e8def12-kube-api-access-4pvfc\") pod \"swift-storage-0\" (UID: \"d1ccc2ed-d1e8-4b84-807d-55d70e8def12\") " pod="openstack/swift-storage-0" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.364072 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"d1ccc2ed-d1e8-4b84-807d-55d70e8def12\") " pod="openstack/swift-storage-0" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.592600 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-vscfw"] Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.594690 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-vscfw" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.598348 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.598430 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.598733 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.603759 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-vscfw"] Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.712139 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c41049e0-0ea2-4944-a23b-739987c73dce-ring-data-devices\") pod \"swift-ring-rebalance-vscfw\" (UID: \"c41049e0-0ea2-4944-a23b-739987c73dce\") " pod="openstack/swift-ring-rebalance-vscfw" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.712195 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c41049e0-0ea2-4944-a23b-739987c73dce-combined-ca-bundle\") pod \"swift-ring-rebalance-vscfw\" (UID: \"c41049e0-0ea2-4944-a23b-739987c73dce\") " pod="openstack/swift-ring-rebalance-vscfw" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.712256 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjcc6\" (UniqueName: \"kubernetes.io/projected/c41049e0-0ea2-4944-a23b-739987c73dce-kube-api-access-jjcc6\") pod \"swift-ring-rebalance-vscfw\" (UID: \"c41049e0-0ea2-4944-a23b-739987c73dce\") " pod="openstack/swift-ring-rebalance-vscfw" Jan 21 16:05:05 crc 
kubenswrapper[4760]: I0121 16:05:05.712285 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c41049e0-0ea2-4944-a23b-739987c73dce-etc-swift\") pod \"swift-ring-rebalance-vscfw\" (UID: \"c41049e0-0ea2-4944-a23b-739987c73dce\") " pod="openstack/swift-ring-rebalance-vscfw" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.712505 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c41049e0-0ea2-4944-a23b-739987c73dce-scripts\") pod \"swift-ring-rebalance-vscfw\" (UID: \"c41049e0-0ea2-4944-a23b-739987c73dce\") " pod="openstack/swift-ring-rebalance-vscfw" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.712546 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c41049e0-0ea2-4944-a23b-739987c73dce-dispersionconf\") pod \"swift-ring-rebalance-vscfw\" (UID: \"c41049e0-0ea2-4944-a23b-739987c73dce\") " pod="openstack/swift-ring-rebalance-vscfw" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.712599 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c41049e0-0ea2-4944-a23b-739987c73dce-swiftconf\") pod \"swift-ring-rebalance-vscfw\" (UID: \"c41049e0-0ea2-4944-a23b-739987c73dce\") " pod="openstack/swift-ring-rebalance-vscfw" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.715010 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.716311 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.720139 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.720274 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.720885 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.721007 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-rp9bx" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.744543 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.814624 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50c45f6c-b35d-41f8-b358-afaf380d8f08-config\") pod \"ovn-northd-0\" (UID: \"50c45f6c-b35d-41f8-b358-afaf380d8f08\") " pod="openstack/ovn-northd-0" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.814945 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dcns\" (UniqueName: \"kubernetes.io/projected/50c45f6c-b35d-41f8-b358-afaf380d8f08-kube-api-access-2dcns\") pod \"ovn-northd-0\" (UID: \"50c45f6c-b35d-41f8-b358-afaf380d8f08\") " pod="openstack/ovn-northd-0" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.815147 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50c45f6c-b35d-41f8-b358-afaf380d8f08-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"50c45f6c-b35d-41f8-b358-afaf380d8f08\") " 
pod="openstack/ovn-northd-0" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.815292 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c41049e0-0ea2-4944-a23b-739987c73dce-scripts\") pod \"swift-ring-rebalance-vscfw\" (UID: \"c41049e0-0ea2-4944-a23b-739987c73dce\") " pod="openstack/swift-ring-rebalance-vscfw" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.815441 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/50c45f6c-b35d-41f8-b358-afaf380d8f08-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"50c45f6c-b35d-41f8-b358-afaf380d8f08\") " pod="openstack/ovn-northd-0" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.815570 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c41049e0-0ea2-4944-a23b-739987c73dce-dispersionconf\") pod \"swift-ring-rebalance-vscfw\" (UID: \"c41049e0-0ea2-4944-a23b-739987c73dce\") " pod="openstack/swift-ring-rebalance-vscfw" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.815693 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c41049e0-0ea2-4944-a23b-739987c73dce-swiftconf\") pod \"swift-ring-rebalance-vscfw\" (UID: \"c41049e0-0ea2-4944-a23b-739987c73dce\") " pod="openstack/swift-ring-rebalance-vscfw" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.815807 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/50c45f6c-b35d-41f8-b358-afaf380d8f08-scripts\") pod \"ovn-northd-0\" (UID: \"50c45f6c-b35d-41f8-b358-afaf380d8f08\") " pod="openstack/ovn-northd-0" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.815925 4760 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/50c45f6c-b35d-41f8-b358-afaf380d8f08-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"50c45f6c-b35d-41f8-b358-afaf380d8f08\") " pod="openstack/ovn-northd-0" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.816027 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d1ccc2ed-d1e8-4b84-807d-55d70e8def12-etc-swift\") pod \"swift-storage-0\" (UID: \"d1ccc2ed-d1e8-4b84-807d-55d70e8def12\") " pod="openstack/swift-storage-0" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.816145 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/50c45f6c-b35d-41f8-b358-afaf380d8f08-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"50c45f6c-b35d-41f8-b358-afaf380d8f08\") " pod="openstack/ovn-northd-0" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.816294 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c41049e0-0ea2-4944-a23b-739987c73dce-combined-ca-bundle\") pod \"swift-ring-rebalance-vscfw\" (UID: \"c41049e0-0ea2-4944-a23b-739987c73dce\") " pod="openstack/swift-ring-rebalance-vscfw" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.816472 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c41049e0-0ea2-4944-a23b-739987c73dce-ring-data-devices\") pod \"swift-ring-rebalance-vscfw\" (UID: \"c41049e0-0ea2-4944-a23b-739987c73dce\") " pod="openstack/swift-ring-rebalance-vscfw" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.816597 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjcc6\" 
(UniqueName: \"kubernetes.io/projected/c41049e0-0ea2-4944-a23b-739987c73dce-kube-api-access-jjcc6\") pod \"swift-ring-rebalance-vscfw\" (UID: \"c41049e0-0ea2-4944-a23b-739987c73dce\") " pod="openstack/swift-ring-rebalance-vscfw" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.816704 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c41049e0-0ea2-4944-a23b-739987c73dce-etc-swift\") pod \"swift-ring-rebalance-vscfw\" (UID: \"c41049e0-0ea2-4944-a23b-739987c73dce\") " pod="openstack/swift-ring-rebalance-vscfw" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.817059 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c41049e0-0ea2-4944-a23b-739987c73dce-scripts\") pod \"swift-ring-rebalance-vscfw\" (UID: \"c41049e0-0ea2-4944-a23b-739987c73dce\") " pod="openstack/swift-ring-rebalance-vscfw" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.817430 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c41049e0-0ea2-4944-a23b-739987c73dce-etc-swift\") pod \"swift-ring-rebalance-vscfw\" (UID: \"c41049e0-0ea2-4944-a23b-739987c73dce\") " pod="openstack/swift-ring-rebalance-vscfw" Jan 21 16:05:05 crc kubenswrapper[4760]: E0121 16:05:05.817665 4760 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 21 16:05:05 crc kubenswrapper[4760]: E0121 16:05:05.817926 4760 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 21 16:05:05 crc kubenswrapper[4760]: E0121 16:05:05.818613 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d1ccc2ed-d1e8-4b84-807d-55d70e8def12-etc-swift podName:d1ccc2ed-d1e8-4b84-807d-55d70e8def12 nodeName:}" failed. 
No retries permitted until 2026-01-21 16:05:06.818566682 +0000 UTC m=+1077.486336430 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d1ccc2ed-d1e8-4b84-807d-55d70e8def12-etc-swift") pod "swift-storage-0" (UID: "d1ccc2ed-d1e8-4b84-807d-55d70e8def12") : configmap "swift-ring-files" not found Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.819260 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c41049e0-0ea2-4944-a23b-739987c73dce-ring-data-devices\") pod \"swift-ring-rebalance-vscfw\" (UID: \"c41049e0-0ea2-4944-a23b-739987c73dce\") " pod="openstack/swift-ring-rebalance-vscfw" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.822086 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c41049e0-0ea2-4944-a23b-739987c73dce-dispersionconf\") pod \"swift-ring-rebalance-vscfw\" (UID: \"c41049e0-0ea2-4944-a23b-739987c73dce\") " pod="openstack/swift-ring-rebalance-vscfw" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.823005 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c41049e0-0ea2-4944-a23b-739987c73dce-swiftconf\") pod \"swift-ring-rebalance-vscfw\" (UID: \"c41049e0-0ea2-4944-a23b-739987c73dce\") " pod="openstack/swift-ring-rebalance-vscfw" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.824550 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c41049e0-0ea2-4944-a23b-739987c73dce-combined-ca-bundle\") pod \"swift-ring-rebalance-vscfw\" (UID: \"c41049e0-0ea2-4944-a23b-739987c73dce\") " pod="openstack/swift-ring-rebalance-vscfw" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.838032 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-jjcc6\" (UniqueName: \"kubernetes.io/projected/c41049e0-0ea2-4944-a23b-739987c73dce-kube-api-access-jjcc6\") pod \"swift-ring-rebalance-vscfw\" (UID: \"c41049e0-0ea2-4944-a23b-739987c73dce\") " pod="openstack/swift-ring-rebalance-vscfw" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.918729 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/50c45f6c-b35d-41f8-b358-afaf380d8f08-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"50c45f6c-b35d-41f8-b358-afaf380d8f08\") " pod="openstack/ovn-northd-0" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.918865 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/50c45f6c-b35d-41f8-b358-afaf380d8f08-scripts\") pod \"ovn-northd-0\" (UID: \"50c45f6c-b35d-41f8-b358-afaf380d8f08\") " pod="openstack/ovn-northd-0" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.918923 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/50c45f6c-b35d-41f8-b358-afaf380d8f08-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"50c45f6c-b35d-41f8-b358-afaf380d8f08\") " pod="openstack/ovn-northd-0" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.919008 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/50c45f6c-b35d-41f8-b358-afaf380d8f08-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"50c45f6c-b35d-41f8-b358-afaf380d8f08\") " pod="openstack/ovn-northd-0" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.919149 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50c45f6c-b35d-41f8-b358-afaf380d8f08-config\") pod \"ovn-northd-0\" (UID: 
\"50c45f6c-b35d-41f8-b358-afaf380d8f08\") " pod="openstack/ovn-northd-0" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.919188 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dcns\" (UniqueName: \"kubernetes.io/projected/50c45f6c-b35d-41f8-b358-afaf380d8f08-kube-api-access-2dcns\") pod \"ovn-northd-0\" (UID: \"50c45f6c-b35d-41f8-b358-afaf380d8f08\") " pod="openstack/ovn-northd-0" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.919218 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50c45f6c-b35d-41f8-b358-afaf380d8f08-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"50c45f6c-b35d-41f8-b358-afaf380d8f08\") " pod="openstack/ovn-northd-0" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.919778 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/50c45f6c-b35d-41f8-b358-afaf380d8f08-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"50c45f6c-b35d-41f8-b358-afaf380d8f08\") " pod="openstack/ovn-northd-0" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.920839 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/50c45f6c-b35d-41f8-b358-afaf380d8f08-scripts\") pod \"ovn-northd-0\" (UID: \"50c45f6c-b35d-41f8-b358-afaf380d8f08\") " pod="openstack/ovn-northd-0" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.920978 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50c45f6c-b35d-41f8-b358-afaf380d8f08-config\") pod \"ovn-northd-0\" (UID: \"50c45f6c-b35d-41f8-b358-afaf380d8f08\") " pod="openstack/ovn-northd-0" Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.925665 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/50c45f6c-b35d-41f8-b358-afaf380d8f08-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"50c45f6c-b35d-41f8-b358-afaf380d8f08\") " pod="openstack/ovn-northd-0"
Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.925755 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/50c45f6c-b35d-41f8-b358-afaf380d8f08-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"50c45f6c-b35d-41f8-b358-afaf380d8f08\") " pod="openstack/ovn-northd-0"
Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.927010 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50c45f6c-b35d-41f8-b358-afaf380d8f08-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"50c45f6c-b35d-41f8-b358-afaf380d8f08\") " pod="openstack/ovn-northd-0"
Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.940555 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dcns\" (UniqueName: \"kubernetes.io/projected/50c45f6c-b35d-41f8-b358-afaf380d8f08-kube-api-access-2dcns\") pod \"ovn-northd-0\" (UID: \"50c45f6c-b35d-41f8-b358-afaf380d8f08\") " pod="openstack/ovn-northd-0"
Jan 21 16:05:05 crc kubenswrapper[4760]: I0121 16:05:05.967517 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-vscfw"
Jan 21 16:05:06 crc kubenswrapper[4760]: I0121 16:05:06.040391 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Jan 21 16:05:06 crc kubenswrapper[4760]: I0121 16:05:06.372883 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-gmwwz" event={"ID":"15134486-2d84-4c09-9a92-4df82dfcf01a","Type":"ContainerStarted","Data":"2a32131b8449d956c7baf21e7e66e7102a270602317e96706634151f8c2da87a"}
Jan 21 16:05:06 crc kubenswrapper[4760]: I0121 16:05:06.373987 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-gmwwz"
Jan 21 16:05:06 crc kubenswrapper[4760]: I0121 16:05:06.384268 4760 generic.go:334] "Generic (PLEG): container finished" podID="3de43463-27f1-4fbe-959a-6c6446414177" containerID="0e6d219ee4178496a68d572f97cdfac7435b2070269de1ee0c2609d9a8855f3a" exitCode=0
Jan 21 16:05:06 crc kubenswrapper[4760]: I0121 16:05:06.384407 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7726-account-create-update-jdlpj" event={"ID":"3de43463-27f1-4fbe-959a-6c6446414177","Type":"ContainerDied","Data":"0e6d219ee4178496a68d572f97cdfac7435b2070269de1ee0c2609d9a8855f3a"}
Jan 21 16:05:06 crc kubenswrapper[4760]: I0121 16:05:06.387981 4760 generic.go:334] "Generic (PLEG): container finished" podID="1213619b-eee7-4221-9083-06362fc707f5" containerID="f514b862ce6febf31745a4600f53c62282c7ae1396e230bccd313518c93d17f3" exitCode=0
Jan 21 16:05:06 crc kubenswrapper[4760]: I0121 16:05:06.388061 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f76c-account-create-update-9wpkx" event={"ID":"1213619b-eee7-4221-9083-06362fc707f5","Type":"ContainerDied","Data":"f514b862ce6febf31745a4600f53c62282c7ae1396e230bccd313518c93d17f3"}
Jan 21 16:05:06 crc kubenswrapper[4760]: I0121 16:05:06.391498 4760 generic.go:334] "Generic (PLEG): container finished" podID="0df56532-7a5e-43a1-88cd-2d55f731b0f1" containerID="b5fa9c7a45fe6e80d225cf15affa00928bbc3595be19eaab232935c968758bd4" exitCode=0
Jan 21 16:05:06 crc kubenswrapper[4760]: I0121 16:05:06.391703 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-995cl" event={"ID":"0df56532-7a5e-43a1-88cd-2d55f731b0f1","Type":"ContainerDied","Data":"b5fa9c7a45fe6e80d225cf15affa00928bbc3595be19eaab232935c968758bd4"}
Jan 21 16:05:06 crc kubenswrapper[4760]: I0121 16:05:06.391865 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ltr79" podUID="c17cd40e-6e7b-4c1e-9ca8-e6edc1248330" containerName="ovn-controller" probeResult="failure" output=<
Jan 21 16:05:06 crc kubenswrapper[4760]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Jan 21 16:05:06 crc kubenswrapper[4760]: >
Jan 21 16:05:06 crc kubenswrapper[4760]: I0121 16:05:06.418944 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-gmwwz" podStartSLOduration=3.418919577 podStartE2EDuration="3.418919577s" podCreationTimestamp="2026-01-21 16:05:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:05:06.416960959 +0000 UTC m=+1077.084730537" watchObservedRunningTime="2026-01-21 16:05:06.418919577 +0000 UTC m=+1077.086689155"
Jan 21 16:05:06 crc kubenswrapper[4760]: I0121 16:05:06.710234 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-vscfw"]
Jan 21 16:05:06 crc kubenswrapper[4760]: W0121 16:05:06.721714 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc41049e0_0ea2_4944_a23b_739987c73dce.slice/crio-352d476e436c549bbc8e9f8fb65684cf9e4c430f501083a7cf6695565c71a105 WatchSource:0}: Error finding container 352d476e436c549bbc8e9f8fb65684cf9e4c430f501083a7cf6695565c71a105: Status 404 returned error can't find the container with id 352d476e436c549bbc8e9f8fb65684cf9e4c430f501083a7cf6695565c71a105
Jan 21 16:05:06 crc kubenswrapper[4760]: I0121 16:05:06.854025 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d1ccc2ed-d1e8-4b84-807d-55d70e8def12-etc-swift\") pod \"swift-storage-0\" (UID: \"d1ccc2ed-d1e8-4b84-807d-55d70e8def12\") " pod="openstack/swift-storage-0"
Jan 21 16:05:06 crc kubenswrapper[4760]: E0121 16:05:06.854230 4760 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 21 16:05:06 crc kubenswrapper[4760]: E0121 16:05:06.854796 4760 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 21 16:05:06 crc kubenswrapper[4760]: E0121 16:05:06.854893 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d1ccc2ed-d1e8-4b84-807d-55d70e8def12-etc-swift podName:d1ccc2ed-d1e8-4b84-807d-55d70e8def12 nodeName:}" failed. No retries permitted until 2026-01-21 16:05:08.854868517 +0000 UTC m=+1079.522638095 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d1ccc2ed-d1e8-4b84-807d-55d70e8def12-etc-swift") pod "swift-storage-0" (UID: "d1ccc2ed-d1e8-4b84-807d-55d70e8def12") : configmap "swift-ring-files" not found
Jan 21 16:05:06 crc kubenswrapper[4760]: I0121 16:05:06.882952 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-chqpt"
Jan 21 16:05:06 crc kubenswrapper[4760]: I0121 16:05:06.956996 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/956c0478-0da7-419e-b003-65e479971040-operator-scripts\") pod \"956c0478-0da7-419e-b003-65e479971040\" (UID: \"956c0478-0da7-419e-b003-65e479971040\") "
Jan 21 16:05:06 crc kubenswrapper[4760]: I0121 16:05:06.957146 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2g5r\" (UniqueName: \"kubernetes.io/projected/956c0478-0da7-419e-b003-65e479971040-kube-api-access-p2g5r\") pod \"956c0478-0da7-419e-b003-65e479971040\" (UID: \"956c0478-0da7-419e-b003-65e479971040\") "
Jan 21 16:05:06 crc kubenswrapper[4760]: I0121 16:05:06.958618 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/956c0478-0da7-419e-b003-65e479971040-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "956c0478-0da7-419e-b003-65e479971040" (UID: "956c0478-0da7-419e-b003-65e479971040"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 16:05:06 crc kubenswrapper[4760]: I0121 16:05:06.967551 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/956c0478-0da7-419e-b003-65e479971040-kube-api-access-p2g5r" (OuterVolumeSpecName: "kube-api-access-p2g5r") pod "956c0478-0da7-419e-b003-65e479971040" (UID: "956c0478-0da7-419e-b003-65e479971040"). InnerVolumeSpecName "kube-api-access-p2g5r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:05:07 crc kubenswrapper[4760]: I0121 16:05:07.018594 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Jan 21 16:05:07 crc kubenswrapper[4760]: W0121 16:05:07.023971 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod50c45f6c_b35d_41f8_b358_afaf380d8f08.slice/crio-4889244c517c4dfc84045159c7a370e0b4e6967b2a8db7158bbeeabdd33d9b2b WatchSource:0}: Error finding container 4889244c517c4dfc84045159c7a370e0b4e6967b2a8db7158bbeeabdd33d9b2b: Status 404 returned error can't find the container with id 4889244c517c4dfc84045159c7a370e0b4e6967b2a8db7158bbeeabdd33d9b2b
Jan 21 16:05:07 crc kubenswrapper[4760]: I0121 16:05:07.058818 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/956c0478-0da7-419e-b003-65e479971040-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 16:05:07 crc kubenswrapper[4760]: I0121 16:05:07.058861 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2g5r\" (UniqueName: \"kubernetes.io/projected/956c0478-0da7-419e-b003-65e479971040-kube-api-access-p2g5r\") on node \"crc\" DevicePath \"\""
Jan 21 16:05:07 crc kubenswrapper[4760]: I0121 16:05:07.149489 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-c8jlj"]
Jan 21 16:05:07 crc kubenswrapper[4760]: E0121 16:05:07.150068 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="956c0478-0da7-419e-b003-65e479971040" containerName="mariadb-database-create"
Jan 21 16:05:07 crc kubenswrapper[4760]: I0121 16:05:07.150091 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="956c0478-0da7-419e-b003-65e479971040" containerName="mariadb-database-create"
Jan 21 16:05:07 crc kubenswrapper[4760]: I0121 16:05:07.150334 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="956c0478-0da7-419e-b003-65e479971040" containerName="mariadb-database-create"
Jan 21 16:05:07 crc kubenswrapper[4760]: I0121 16:05:07.151145 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-c8jlj"
Jan 21 16:05:07 crc kubenswrapper[4760]: I0121 16:05:07.161546 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-c8jlj"]
Jan 21 16:05:07 crc kubenswrapper[4760]: I0121 16:05:07.233073 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-7cb9-account-create-update-8b224"]
Jan 21 16:05:07 crc kubenswrapper[4760]: I0121 16:05:07.234851 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-7cb9-account-create-update-8b224"
Jan 21 16:05:07 crc kubenswrapper[4760]: I0121 16:05:07.246958 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-7cb9-account-create-update-8b224"]
Jan 21 16:05:07 crc kubenswrapper[4760]: I0121 16:05:07.250889 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Jan 21 16:05:07 crc kubenswrapper[4760]: I0121 16:05:07.261780 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a391de4-6ff8-49ac-93cb-98b98202f3f1-operator-scripts\") pod \"glance-db-create-c8jlj\" (UID: \"7a391de4-6ff8-49ac-93cb-98b98202f3f1\") " pod="openstack/glance-db-create-c8jlj"
Jan 21 16:05:07 crc kubenswrapper[4760]: I0121 16:05:07.261829 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l959p\" (UniqueName: \"kubernetes.io/projected/d1305608-194d-4c7f-b3c7-8d6925fed34f-kube-api-access-l959p\") pod \"glance-7cb9-account-create-update-8b224\" (UID: \"d1305608-194d-4c7f-b3c7-8d6925fed34f\") " pod="openstack/glance-7cb9-account-create-update-8b224"
Jan 21 16:05:07 crc kubenswrapper[4760]: I0121 16:05:07.261860 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fbpp\" (UniqueName: \"kubernetes.io/projected/7a391de4-6ff8-49ac-93cb-98b98202f3f1-kube-api-access-7fbpp\") pod \"glance-db-create-c8jlj\" (UID: \"7a391de4-6ff8-49ac-93cb-98b98202f3f1\") " pod="openstack/glance-db-create-c8jlj"
Jan 21 16:05:07 crc kubenswrapper[4760]: I0121 16:05:07.261894 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1305608-194d-4c7f-b3c7-8d6925fed34f-operator-scripts\") pod \"glance-7cb9-account-create-update-8b224\" (UID: \"d1305608-194d-4c7f-b3c7-8d6925fed34f\") " pod="openstack/glance-7cb9-account-create-update-8b224"
Jan 21 16:05:07 crc kubenswrapper[4760]: I0121 16:05:07.363873 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1305608-194d-4c7f-b3c7-8d6925fed34f-operator-scripts\") pod \"glance-7cb9-account-create-update-8b224\" (UID: \"d1305608-194d-4c7f-b3c7-8d6925fed34f\") " pod="openstack/glance-7cb9-account-create-update-8b224"
Jan 21 16:05:07 crc kubenswrapper[4760]: I0121 16:05:07.364056 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a391de4-6ff8-49ac-93cb-98b98202f3f1-operator-scripts\") pod \"glance-db-create-c8jlj\" (UID: \"7a391de4-6ff8-49ac-93cb-98b98202f3f1\") " pod="openstack/glance-db-create-c8jlj"
Jan 21 16:05:07 crc kubenswrapper[4760]: I0121 16:05:07.364092 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l959p\" (UniqueName: \"kubernetes.io/projected/d1305608-194d-4c7f-b3c7-8d6925fed34f-kube-api-access-l959p\") pod \"glance-7cb9-account-create-update-8b224\" (UID: \"d1305608-194d-4c7f-b3c7-8d6925fed34f\") " pod="openstack/glance-7cb9-account-create-update-8b224"
Jan 21 16:05:07 crc kubenswrapper[4760]: I0121 16:05:07.364126 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fbpp\" (UniqueName: \"kubernetes.io/projected/7a391de4-6ff8-49ac-93cb-98b98202f3f1-kube-api-access-7fbpp\") pod \"glance-db-create-c8jlj\" (UID: \"7a391de4-6ff8-49ac-93cb-98b98202f3f1\") " pod="openstack/glance-db-create-c8jlj"
Jan 21 16:05:07 crc kubenswrapper[4760]: I0121 16:05:07.365051 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a391de4-6ff8-49ac-93cb-98b98202f3f1-operator-scripts\") pod \"glance-db-create-c8jlj\" (UID: \"7a391de4-6ff8-49ac-93cb-98b98202f3f1\") " pod="openstack/glance-db-create-c8jlj"
Jan 21 16:05:07 crc kubenswrapper[4760]: I0121 16:05:07.365806 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1305608-194d-4c7f-b3c7-8d6925fed34f-operator-scripts\") pod \"glance-7cb9-account-create-update-8b224\" (UID: \"d1305608-194d-4c7f-b3c7-8d6925fed34f\") " pod="openstack/glance-7cb9-account-create-update-8b224"
Jan 21 16:05:07 crc kubenswrapper[4760]: I0121 16:05:07.384728 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l959p\" (UniqueName: \"kubernetes.io/projected/d1305608-194d-4c7f-b3c7-8d6925fed34f-kube-api-access-l959p\") pod \"glance-7cb9-account-create-update-8b224\" (UID: \"d1305608-194d-4c7f-b3c7-8d6925fed34f\") " pod="openstack/glance-7cb9-account-create-update-8b224"
Jan 21 16:05:07 crc kubenswrapper[4760]: I0121 16:05:07.389141 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fbpp\" (UniqueName: \"kubernetes.io/projected/7a391de4-6ff8-49ac-93cb-98b98202f3f1-kube-api-access-7fbpp\") pod \"glance-db-create-c8jlj\" (UID: \"7a391de4-6ff8-49ac-93cb-98b98202f3f1\") " pod="openstack/glance-db-create-c8jlj"
Jan 21 16:05:07 crc kubenswrapper[4760]: I0121 16:05:07.406444 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-chqpt" event={"ID":"956c0478-0da7-419e-b003-65e479971040","Type":"ContainerDied","Data":"dcfa2b2f0f9e697f31b5c501f7104f006aeaa37c3e5ebe4b721fa5414a7ea15d"}
Jan 21 16:05:07 crc kubenswrapper[4760]: I0121 16:05:07.406513 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-chqpt"
Jan 21 16:05:07 crc kubenswrapper[4760]: I0121 16:05:07.407149 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcfa2b2f0f9e697f31b5c501f7104f006aeaa37c3e5ebe4b721fa5414a7ea15d"
Jan 21 16:05:07 crc kubenswrapper[4760]: I0121 16:05:07.407921 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-vscfw" event={"ID":"c41049e0-0ea2-4944-a23b-739987c73dce","Type":"ContainerStarted","Data":"352d476e436c549bbc8e9f8fb65684cf9e4c430f501083a7cf6695565c71a105"}
Jan 21 16:05:07 crc kubenswrapper[4760]: I0121 16:05:07.409182 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"50c45f6c-b35d-41f8-b358-afaf380d8f08","Type":"ContainerStarted","Data":"4889244c517c4dfc84045159c7a370e0b4e6967b2a8db7158bbeeabdd33d9b2b"}
Jan 21 16:05:07 crc kubenswrapper[4760]: I0121 16:05:07.479796 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-c8jlj"
Jan 21 16:05:07 crc kubenswrapper[4760]: I0121 16:05:07.560874 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-7cb9-account-create-update-8b224"
Jan 21 16:05:08 crc kubenswrapper[4760]: I0121 16:05:08.421320 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f76c-account-create-update-9wpkx" event={"ID":"1213619b-eee7-4221-9083-06362fc707f5","Type":"ContainerDied","Data":"7d936c7e1f63a5b1fbcb03b390c6a84eacf0742287e4a6f22ae20db57e697726"}
Jan 21 16:05:08 crc kubenswrapper[4760]: I0121 16:05:08.421637 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d936c7e1f63a5b1fbcb03b390c6a84eacf0742287e4a6f22ae20db57e697726"
Jan 21 16:05:08 crc kubenswrapper[4760]: I0121 16:05:08.423837 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-995cl" event={"ID":"0df56532-7a5e-43a1-88cd-2d55f731b0f1","Type":"ContainerDied","Data":"a87e3f525b4e1f6813d35c6be9149ed1e28b3db8b0c88c0dbd185c21db488892"}
Jan 21 16:05:08 crc kubenswrapper[4760]: I0121 16:05:08.423887 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a87e3f525b4e1f6813d35c6be9149ed1e28b3db8b0c88c0dbd185c21db488892"
Jan 21 16:05:08 crc kubenswrapper[4760]: I0121 16:05:08.427143 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7726-account-create-update-jdlpj" event={"ID":"3de43463-27f1-4fbe-959a-6c6446414177","Type":"ContainerDied","Data":"9d1bae5e4454fe16a467bc924c1eb9fe51822476f8fbdb108196f8b68c89887f"}
Jan 21 16:05:08 crc kubenswrapper[4760]: I0121 16:05:08.427175 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d1bae5e4454fe16a467bc924c1eb9fe51822476f8fbdb108196f8b68c89887f"
Jan 21 16:05:08 crc kubenswrapper[4760]: I0121 16:05:08.450538 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-995cl"
Jan 21 16:05:08 crc kubenswrapper[4760]: I0121 16:05:08.455242 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-f76c-account-create-update-9wpkx"
Jan 21 16:05:08 crc kubenswrapper[4760]: I0121 16:05:08.462144 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7726-account-create-update-jdlpj"
Jan 21 16:05:08 crc kubenswrapper[4760]: I0121 16:05:08.483921 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g222\" (UniqueName: \"kubernetes.io/projected/0df56532-7a5e-43a1-88cd-2d55f731b0f1-kube-api-access-4g222\") pod \"0df56532-7a5e-43a1-88cd-2d55f731b0f1\" (UID: \"0df56532-7a5e-43a1-88cd-2d55f731b0f1\") "
Jan 21 16:05:08 crc kubenswrapper[4760]: I0121 16:05:08.484020 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqjzn\" (UniqueName: \"kubernetes.io/projected/3de43463-27f1-4fbe-959a-6c6446414177-kube-api-access-pqjzn\") pod \"3de43463-27f1-4fbe-959a-6c6446414177\" (UID: \"3de43463-27f1-4fbe-959a-6c6446414177\") "
Jan 21 16:05:08 crc kubenswrapper[4760]: I0121 16:05:08.484089 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6scl9\" (UniqueName: \"kubernetes.io/projected/1213619b-eee7-4221-9083-06362fc707f5-kube-api-access-6scl9\") pod \"1213619b-eee7-4221-9083-06362fc707f5\" (UID: \"1213619b-eee7-4221-9083-06362fc707f5\") "
Jan 21 16:05:08 crc kubenswrapper[4760]: I0121 16:05:08.484126 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1213619b-eee7-4221-9083-06362fc707f5-operator-scripts\") pod \"1213619b-eee7-4221-9083-06362fc707f5\" (UID: \"1213619b-eee7-4221-9083-06362fc707f5\") "
Jan 21 16:05:08 crc kubenswrapper[4760]: I0121 16:05:08.484176 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0df56532-7a5e-43a1-88cd-2d55f731b0f1-operator-scripts\") pod \"0df56532-7a5e-43a1-88cd-2d55f731b0f1\" (UID: \"0df56532-7a5e-43a1-88cd-2d55f731b0f1\") "
Jan 21 16:05:08 crc kubenswrapper[4760]: I0121 16:05:08.484267 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3de43463-27f1-4fbe-959a-6c6446414177-operator-scripts\") pod \"3de43463-27f1-4fbe-959a-6c6446414177\" (UID: \"3de43463-27f1-4fbe-959a-6c6446414177\") "
Jan 21 16:05:08 crc kubenswrapper[4760]: I0121 16:05:08.485396 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3de43463-27f1-4fbe-959a-6c6446414177-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3de43463-27f1-4fbe-959a-6c6446414177" (UID: "3de43463-27f1-4fbe-959a-6c6446414177"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 16:05:08 crc kubenswrapper[4760]: I0121 16:05:08.485445 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0df56532-7a5e-43a1-88cd-2d55f731b0f1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0df56532-7a5e-43a1-88cd-2d55f731b0f1" (UID: "0df56532-7a5e-43a1-88cd-2d55f731b0f1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 16:05:08 crc kubenswrapper[4760]: I0121 16:05:08.486080 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1213619b-eee7-4221-9083-06362fc707f5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1213619b-eee7-4221-9083-06362fc707f5" (UID: "1213619b-eee7-4221-9083-06362fc707f5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 16:05:08 crc kubenswrapper[4760]: I0121 16:05:08.495039 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3de43463-27f1-4fbe-959a-6c6446414177-kube-api-access-pqjzn" (OuterVolumeSpecName: "kube-api-access-pqjzn") pod "3de43463-27f1-4fbe-959a-6c6446414177" (UID: "3de43463-27f1-4fbe-959a-6c6446414177"). InnerVolumeSpecName "kube-api-access-pqjzn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:05:08 crc kubenswrapper[4760]: I0121 16:05:08.500872 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0df56532-7a5e-43a1-88cd-2d55f731b0f1-kube-api-access-4g222" (OuterVolumeSpecName: "kube-api-access-4g222") pod "0df56532-7a5e-43a1-88cd-2d55f731b0f1" (UID: "0df56532-7a5e-43a1-88cd-2d55f731b0f1"). InnerVolumeSpecName "kube-api-access-4g222". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:05:08 crc kubenswrapper[4760]: I0121 16:05:08.502000 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1213619b-eee7-4221-9083-06362fc707f5-kube-api-access-6scl9" (OuterVolumeSpecName: "kube-api-access-6scl9") pod "1213619b-eee7-4221-9083-06362fc707f5" (UID: "1213619b-eee7-4221-9083-06362fc707f5"). InnerVolumeSpecName "kube-api-access-6scl9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:05:08 crc kubenswrapper[4760]: I0121 16:05:08.568856 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-c8jlj"]
Jan 21 16:05:08 crc kubenswrapper[4760]: I0121 16:05:08.587357 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3de43463-27f1-4fbe-959a-6c6446414177-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 16:05:08 crc kubenswrapper[4760]: I0121 16:05:08.587400 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4g222\" (UniqueName: \"kubernetes.io/projected/0df56532-7a5e-43a1-88cd-2d55f731b0f1-kube-api-access-4g222\") on node \"crc\" DevicePath \"\""
Jan 21 16:05:08 crc kubenswrapper[4760]: I0121 16:05:08.587416 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqjzn\" (UniqueName: \"kubernetes.io/projected/3de43463-27f1-4fbe-959a-6c6446414177-kube-api-access-pqjzn\") on node \"crc\" DevicePath \"\""
Jan 21 16:05:08 crc kubenswrapper[4760]: I0121 16:05:08.587432 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6scl9\" (UniqueName: \"kubernetes.io/projected/1213619b-eee7-4221-9083-06362fc707f5-kube-api-access-6scl9\") on node \"crc\" DevicePath \"\""
Jan 21 16:05:08 crc kubenswrapper[4760]: I0121 16:05:08.587445 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1213619b-eee7-4221-9083-06362fc707f5-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 16:05:08 crc kubenswrapper[4760]: I0121 16:05:08.587532 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0df56532-7a5e-43a1-88cd-2d55f731b0f1-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 16:05:08 crc kubenswrapper[4760]: I0121 16:05:08.597129 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-7cb9-account-create-update-8b224"]
Jan 21 16:05:08 crc kubenswrapper[4760]: W0121 16:05:08.602739 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1305608_194d_4c7f_b3c7_8d6925fed34f.slice/crio-962d0a240fa343388b8905ae749295a3fe8b79838542c887cc70346b700337c0 WatchSource:0}: Error finding container 962d0a240fa343388b8905ae749295a3fe8b79838542c887cc70346b700337c0: Status 404 returned error can't find the container with id 962d0a240fa343388b8905ae749295a3fe8b79838542c887cc70346b700337c0
Jan 21 16:05:08 crc kubenswrapper[4760]: I0121 16:05:08.893403 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d1ccc2ed-d1e8-4b84-807d-55d70e8def12-etc-swift\") pod \"swift-storage-0\" (UID: \"d1ccc2ed-d1e8-4b84-807d-55d70e8def12\") " pod="openstack/swift-storage-0"
Jan 21 16:05:08 crc kubenswrapper[4760]: E0121 16:05:08.893841 4760 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 21 16:05:08 crc kubenswrapper[4760]: E0121 16:05:08.893865 4760 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 21 16:05:08 crc kubenswrapper[4760]: E0121 16:05:08.894358 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d1ccc2ed-d1e8-4b84-807d-55d70e8def12-etc-swift podName:d1ccc2ed-d1e8-4b84-807d-55d70e8def12 nodeName:}" failed. No retries permitted until 2026-01-21 16:05:12.893920592 +0000 UTC m=+1083.561690170 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d1ccc2ed-d1e8-4b84-807d-55d70e8def12-etc-swift") pod "swift-storage-0" (UID: "d1ccc2ed-d1e8-4b84-807d-55d70e8def12") : configmap "swift-ring-files" not found
Jan 21 16:05:09 crc kubenswrapper[4760]: I0121 16:05:09.134440 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-4ghjk"]
Jan 21 16:05:09 crc kubenswrapper[4760]: E0121 16:05:09.135480 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3de43463-27f1-4fbe-959a-6c6446414177" containerName="mariadb-account-create-update"
Jan 21 16:05:09 crc kubenswrapper[4760]: I0121 16:05:09.135580 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="3de43463-27f1-4fbe-959a-6c6446414177" containerName="mariadb-account-create-update"
Jan 21 16:05:09 crc kubenswrapper[4760]: E0121 16:05:09.135667 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0df56532-7a5e-43a1-88cd-2d55f731b0f1" containerName="mariadb-database-create"
Jan 21 16:05:09 crc kubenswrapper[4760]: I0121 16:05:09.135759 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="0df56532-7a5e-43a1-88cd-2d55f731b0f1" containerName="mariadb-database-create"
Jan 21 16:05:09 crc kubenswrapper[4760]: E0121 16:05:09.135861 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1213619b-eee7-4221-9083-06362fc707f5" containerName="mariadb-account-create-update"
Jan 21 16:05:09 crc kubenswrapper[4760]: I0121 16:05:09.135933 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="1213619b-eee7-4221-9083-06362fc707f5" containerName="mariadb-account-create-update"
Jan 21 16:05:09 crc kubenswrapper[4760]: I0121 16:05:09.136207 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="3de43463-27f1-4fbe-959a-6c6446414177" containerName="mariadb-account-create-update"
Jan 21 16:05:09 crc kubenswrapper[4760]: I0121 16:05:09.136305 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="1213619b-eee7-4221-9083-06362fc707f5" containerName="mariadb-account-create-update"
Jan 21 16:05:09 crc kubenswrapper[4760]: I0121 16:05:09.136422 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="0df56532-7a5e-43a1-88cd-2d55f731b0f1" containerName="mariadb-database-create"
Jan 21 16:05:09 crc kubenswrapper[4760]: I0121 16:05:09.137300 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-4ghjk"
Jan 21 16:05:09 crc kubenswrapper[4760]: I0121 16:05:09.141260 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Jan 21 16:05:09 crc kubenswrapper[4760]: I0121 16:05:09.144835 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-4ghjk"]
Jan 21 16:05:09 crc kubenswrapper[4760]: I0121 16:05:09.199731 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z55sp\" (UniqueName: \"kubernetes.io/projected/148dd39f-7ece-4735-ade5-103446b56147-kube-api-access-z55sp\") pod \"root-account-create-update-4ghjk\" (UID: \"148dd39f-7ece-4735-ade5-103446b56147\") " pod="openstack/root-account-create-update-4ghjk"
Jan 21 16:05:09 crc kubenswrapper[4760]: I0121 16:05:09.199912 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/148dd39f-7ece-4735-ade5-103446b56147-operator-scripts\") pod \"root-account-create-update-4ghjk\" (UID: \"148dd39f-7ece-4735-ade5-103446b56147\") " pod="openstack/root-account-create-update-4ghjk"
Jan 21 16:05:09 crc kubenswrapper[4760]: I0121 16:05:09.288584 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Jan 21 16:05:09 crc kubenswrapper[4760]: I0121 16:05:09.302551 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z55sp\" (UniqueName: \"kubernetes.io/projected/148dd39f-7ece-4735-ade5-103446b56147-kube-api-access-z55sp\") pod \"root-account-create-update-4ghjk\" (UID: \"148dd39f-7ece-4735-ade5-103446b56147\") " pod="openstack/root-account-create-update-4ghjk"
Jan 21 16:05:09 crc kubenswrapper[4760]: I0121 16:05:09.302657 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/148dd39f-7ece-4735-ade5-103446b56147-operator-scripts\") pod \"root-account-create-update-4ghjk\" (UID: \"148dd39f-7ece-4735-ade5-103446b56147\") " pod="openstack/root-account-create-update-4ghjk"
Jan 21 16:05:09 crc kubenswrapper[4760]: I0121 16:05:09.303616 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/148dd39f-7ece-4735-ade5-103446b56147-operator-scripts\") pod \"root-account-create-update-4ghjk\" (UID: \"148dd39f-7ece-4735-ade5-103446b56147\") " pod="openstack/root-account-create-update-4ghjk"
Jan 21 16:05:09 crc kubenswrapper[4760]: I0121 16:05:09.326374 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z55sp\" (UniqueName: \"kubernetes.io/projected/148dd39f-7ece-4735-ade5-103446b56147-kube-api-access-z55sp\") pod \"root-account-create-update-4ghjk\" (UID: \"148dd39f-7ece-4735-ade5-103446b56147\") " pod="openstack/root-account-create-update-4ghjk"
Jan 21 16:05:09 crc kubenswrapper[4760]: I0121 16:05:09.437458 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7cb9-account-create-update-8b224" event={"ID":"d1305608-194d-4c7f-b3c7-8d6925fed34f","Type":"ContainerStarted","Data":"962d0a240fa343388b8905ae749295a3fe8b79838542c887cc70346b700337c0"}
Jan 21 16:05:09 crc kubenswrapper[4760]: I0121 16:05:09.439458 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-995cl"
Jan 21 16:05:09 crc kubenswrapper[4760]: I0121 16:05:09.439522 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7726-account-create-update-jdlpj"
Jan 21 16:05:09 crc kubenswrapper[4760]: I0121 16:05:09.439534 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-c8jlj" event={"ID":"7a391de4-6ff8-49ac-93cb-98b98202f3f1","Type":"ContainerStarted","Data":"d747c33a646048c45dd2a884f14d6fc41ba8476082b8916892b5d80fd9283506"}
Jan 21 16:05:09 crc kubenswrapper[4760]: I0121 16:05:09.439524 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-f76c-account-create-update-9wpkx"
Jan 21 16:05:09 crc kubenswrapper[4760]: I0121 16:05:09.467080 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-4ghjk"
Jan 21 16:05:09 crc kubenswrapper[4760]: I0121 16:05:09.478643 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Jan 21 16:05:10 crc kubenswrapper[4760]: I0121 16:05:10.002222 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-4ghjk"]
Jan 21 16:05:10 crc kubenswrapper[4760]: I0121 16:05:10.451287 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-c8jlj" event={"ID":"7a391de4-6ff8-49ac-93cb-98b98202f3f1","Type":"ContainerStarted","Data":"7de641f204067609e49f50987152c414d28eafc669df0fe2da325a6f2ce739fc"}
Jan 21 16:05:10 crc kubenswrapper[4760]: I0121 16:05:10.453504 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7cb9-account-create-update-8b224" event={"ID":"d1305608-194d-4c7f-b3c7-8d6925fed34f","Type":"ContainerStarted","Data":"a6817629fde9a036c8116050940d3d6eb527900cceb0a266e33fc80b17fab3a5"}
Jan 21 16:05:10 crc kubenswrapper[4760]: I0121 16:05:10.464904 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4ghjk" event={"ID":"148dd39f-7ece-4735-ade5-103446b56147","Type":"ContainerStarted","Data":"376ea2217695f4951edd81ef92f0f246350c2850ab926b7fd4315df7b0299b4c"}
Jan 21 16:05:10 crc kubenswrapper[4760]: I0121 16:05:10.486070 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-c8jlj" podStartSLOduration=3.48604648 podStartE2EDuration="3.48604648s" podCreationTimestamp="2026-01-21 16:05:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:05:10.472191909 +0000 UTC m=+1081.139961487" watchObservedRunningTime="2026-01-21 16:05:10.48604648 +0000 UTC m=+1081.153816068"
Jan 21 16:05:10 crc kubenswrapper[4760]: I0121 16:05:10.495369 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-7cb9-account-create-update-8b224" podStartSLOduration=3.495343863 podStartE2EDuration="3.495343863s" podCreationTimestamp="2026-01-21 16:05:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:05:10.493964166 +0000 UTC m=+1081.161733754" watchObservedRunningTime="2026-01-21 16:05:10.495343863 +0000 UTC m=+1081.163113441"
Jan 21 16:05:11 crc kubenswrapper[4760]: I0121 16:05:11.259377 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ltr79" podUID="c17cd40e-6e7b-4c1e-9ca8-e6edc1248330" containerName="ovn-controller" probeResult="failure" output=<
Jan 21 16:05:11 crc kubenswrapper[4760]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Jan 21 16:05:11 crc kubenswrapper[4760]: >
Jan 21 16:05:11 crc kubenswrapper[4760]: I0121 16:05:11.484236 4760 generic.go:334] "Generic (PLEG): container finished" podID="d1305608-194d-4c7f-b3c7-8d6925fed34f" containerID="a6817629fde9a036c8116050940d3d6eb527900cceb0a266e33fc80b17fab3a5" exitCode=0
Jan 21 16:05:11 crc kubenswrapper[4760]: I0121 16:05:11.484766 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7cb9-account-create-update-8b224" event={"ID":"d1305608-194d-4c7f-b3c7-8d6925fed34f","Type":"ContainerDied","Data":"a6817629fde9a036c8116050940d3d6eb527900cceb0a266e33fc80b17fab3a5"}
Jan 21 16:05:11 crc kubenswrapper[4760]: I0121 16:05:11.491761 4760 generic.go:334] "Generic (PLEG): container finished" podID="7a391de4-6ff8-49ac-93cb-98b98202f3f1" containerID="7de641f204067609e49f50987152c414d28eafc669df0fe2da325a6f2ce739fc" exitCode=0
Jan 21 16:05:11 crc kubenswrapper[4760]: I0121 16:05:11.491815 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-c8jlj" event={"ID":"7a391de4-6ff8-49ac-93cb-98b98202f3f1","Type":"ContainerDied","Data":"7de641f204067609e49f50987152c414d28eafc669df0fe2da325a6f2ce739fc"}
Jan 21 16:05:11 crc kubenswrapper[4760]: I0121 16:05:11.719201 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-7dgzd"]
Jan 21 16:05:11 crc kubenswrapper[4760]: I0121 16:05:11.722215 4760 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/cinder-db-create-7dgzd" Jan 21 16:05:11 crc kubenswrapper[4760]: I0121 16:05:11.733322 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-7dgzd"] Jan 21 16:05:11 crc kubenswrapper[4760]: I0121 16:05:11.756122 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ddbef96c-1bfa-412a-a49d-460b6f6d90f9-operator-scripts\") pod \"cinder-db-create-7dgzd\" (UID: \"ddbef96c-1bfa-412a-a49d-460b6f6d90f9\") " pod="openstack/cinder-db-create-7dgzd" Jan 21 16:05:11 crc kubenswrapper[4760]: I0121 16:05:11.756184 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hplvf\" (UniqueName: \"kubernetes.io/projected/ddbef96c-1bfa-412a-a49d-460b6f6d90f9-kube-api-access-hplvf\") pod \"cinder-db-create-7dgzd\" (UID: \"ddbef96c-1bfa-412a-a49d-460b6f6d90f9\") " pod="openstack/cinder-db-create-7dgzd" Jan 21 16:05:11 crc kubenswrapper[4760]: I0121 16:05:11.812789 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-884n6"] Jan 21 16:05:11 crc kubenswrapper[4760]: I0121 16:05:11.813880 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-884n6" Jan 21 16:05:11 crc kubenswrapper[4760]: I0121 16:05:11.828316 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-884n6"] Jan 21 16:05:11 crc kubenswrapper[4760]: I0121 16:05:11.835329 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-4b52-account-create-update-fnb8z"] Jan 21 16:05:11 crc kubenswrapper[4760]: I0121 16:05:11.837522 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-4b52-account-create-update-fnb8z" Jan 21 16:05:11 crc kubenswrapper[4760]: I0121 16:05:11.841887 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 21 16:05:11 crc kubenswrapper[4760]: I0121 16:05:11.857583 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ddbef96c-1bfa-412a-a49d-460b6f6d90f9-operator-scripts\") pod \"cinder-db-create-7dgzd\" (UID: \"ddbef96c-1bfa-412a-a49d-460b6f6d90f9\") " pod="openstack/cinder-db-create-7dgzd" Jan 21 16:05:11 crc kubenswrapper[4760]: I0121 16:05:11.857643 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hplvf\" (UniqueName: \"kubernetes.io/projected/ddbef96c-1bfa-412a-a49d-460b6f6d90f9-kube-api-access-hplvf\") pod \"cinder-db-create-7dgzd\" (UID: \"ddbef96c-1bfa-412a-a49d-460b6f6d90f9\") " pod="openstack/cinder-db-create-7dgzd" Jan 21 16:05:11 crc kubenswrapper[4760]: I0121 16:05:11.857668 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zm4z\" (UniqueName: \"kubernetes.io/projected/c29c2669-d63b-4ac4-8680-fc14ced158f1-kube-api-access-4zm4z\") pod \"barbican-4b52-account-create-update-fnb8z\" (UID: \"c29c2669-d63b-4ac4-8680-fc14ced158f1\") " pod="openstack/barbican-4b52-account-create-update-fnb8z" Jan 21 16:05:11 crc kubenswrapper[4760]: I0121 16:05:11.857692 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c29c2669-d63b-4ac4-8680-fc14ced158f1-operator-scripts\") pod \"barbican-4b52-account-create-update-fnb8z\" (UID: \"c29c2669-d63b-4ac4-8680-fc14ced158f1\") " pod="openstack/barbican-4b52-account-create-update-fnb8z" Jan 21 16:05:11 crc kubenswrapper[4760]: I0121 16:05:11.858471 4760 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ddbef96c-1bfa-412a-a49d-460b6f6d90f9-operator-scripts\") pod \"cinder-db-create-7dgzd\" (UID: \"ddbef96c-1bfa-412a-a49d-460b6f6d90f9\") " pod="openstack/cinder-db-create-7dgzd" Jan 21 16:05:11 crc kubenswrapper[4760]: I0121 16:05:11.878611 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-4b52-account-create-update-fnb8z"] Jan 21 16:05:11 crc kubenswrapper[4760]: I0121 16:05:11.893498 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hplvf\" (UniqueName: \"kubernetes.io/projected/ddbef96c-1bfa-412a-a49d-460b6f6d90f9-kube-api-access-hplvf\") pod \"cinder-db-create-7dgzd\" (UID: \"ddbef96c-1bfa-412a-a49d-460b6f6d90f9\") " pod="openstack/cinder-db-create-7dgzd" Jan 21 16:05:11 crc kubenswrapper[4760]: I0121 16:05:11.933725 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-faad-account-create-update-4btpg"] Jan 21 16:05:11 crc kubenswrapper[4760]: I0121 16:05:11.936093 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-faad-account-create-update-4btpg" Jan 21 16:05:11 crc kubenswrapper[4760]: I0121 16:05:11.939575 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 21 16:05:11 crc kubenswrapper[4760]: I0121 16:05:11.948880 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-faad-account-create-update-4btpg"] Jan 21 16:05:11 crc kubenswrapper[4760]: I0121 16:05:11.959104 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxxvw\" (UniqueName: \"kubernetes.io/projected/5f90ad69-2b58-48f1-a605-63486d38956f-kube-api-access-xxxvw\") pod \"barbican-db-create-884n6\" (UID: \"5f90ad69-2b58-48f1-a605-63486d38956f\") " pod="openstack/barbican-db-create-884n6" Jan 21 16:05:11 crc kubenswrapper[4760]: I0121 16:05:11.959155 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f90ad69-2b58-48f1-a605-63486d38956f-operator-scripts\") pod \"barbican-db-create-884n6\" (UID: \"5f90ad69-2b58-48f1-a605-63486d38956f\") " pod="openstack/barbican-db-create-884n6" Jan 21 16:05:11 crc kubenswrapper[4760]: I0121 16:05:11.959191 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zm4z\" (UniqueName: \"kubernetes.io/projected/c29c2669-d63b-4ac4-8680-fc14ced158f1-kube-api-access-4zm4z\") pod \"barbican-4b52-account-create-update-fnb8z\" (UID: \"c29c2669-d63b-4ac4-8680-fc14ced158f1\") " pod="openstack/barbican-4b52-account-create-update-fnb8z" Jan 21 16:05:11 crc kubenswrapper[4760]: I0121 16:05:11.959214 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c29c2669-d63b-4ac4-8680-fc14ced158f1-operator-scripts\") pod \"barbican-4b52-account-create-update-fnb8z\" (UID: 
\"c29c2669-d63b-4ac4-8680-fc14ced158f1\") " pod="openstack/barbican-4b52-account-create-update-fnb8z" Jan 21 16:05:11 crc kubenswrapper[4760]: I0121 16:05:11.959246 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/988b2688-7981-4093-a1d2-45796fb69f52-operator-scripts\") pod \"cinder-faad-account-create-update-4btpg\" (UID: \"988b2688-7981-4093-a1d2-45796fb69f52\") " pod="openstack/cinder-faad-account-create-update-4btpg" Jan 21 16:05:11 crc kubenswrapper[4760]: I0121 16:05:11.959643 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frnlh\" (UniqueName: \"kubernetes.io/projected/988b2688-7981-4093-a1d2-45796fb69f52-kube-api-access-frnlh\") pod \"cinder-faad-account-create-update-4btpg\" (UID: \"988b2688-7981-4093-a1d2-45796fb69f52\") " pod="openstack/cinder-faad-account-create-update-4btpg" Jan 21 16:05:11 crc kubenswrapper[4760]: I0121 16:05:11.960435 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c29c2669-d63b-4ac4-8680-fc14ced158f1-operator-scripts\") pod \"barbican-4b52-account-create-update-fnb8z\" (UID: \"c29c2669-d63b-4ac4-8680-fc14ced158f1\") " pod="openstack/barbican-4b52-account-create-update-fnb8z" Jan 21 16:05:11 crc kubenswrapper[4760]: I0121 16:05:11.983795 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zm4z\" (UniqueName: \"kubernetes.io/projected/c29c2669-d63b-4ac4-8680-fc14ced158f1-kube-api-access-4zm4z\") pod \"barbican-4b52-account-create-update-fnb8z\" (UID: \"c29c2669-d63b-4ac4-8680-fc14ced158f1\") " pod="openstack/barbican-4b52-account-create-update-fnb8z" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.041457 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-7dgzd" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.060902 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/988b2688-7981-4093-a1d2-45796fb69f52-operator-scripts\") pod \"cinder-faad-account-create-update-4btpg\" (UID: \"988b2688-7981-4093-a1d2-45796fb69f52\") " pod="openstack/cinder-faad-account-create-update-4btpg" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.060982 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frnlh\" (UniqueName: \"kubernetes.io/projected/988b2688-7981-4093-a1d2-45796fb69f52-kube-api-access-frnlh\") pod \"cinder-faad-account-create-update-4btpg\" (UID: \"988b2688-7981-4093-a1d2-45796fb69f52\") " pod="openstack/cinder-faad-account-create-update-4btpg" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.061072 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxxvw\" (UniqueName: \"kubernetes.io/projected/5f90ad69-2b58-48f1-a605-63486d38956f-kube-api-access-xxxvw\") pod \"barbican-db-create-884n6\" (UID: \"5f90ad69-2b58-48f1-a605-63486d38956f\") " pod="openstack/barbican-db-create-884n6" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.061091 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f90ad69-2b58-48f1-a605-63486d38956f-operator-scripts\") pod \"barbican-db-create-884n6\" (UID: \"5f90ad69-2b58-48f1-a605-63486d38956f\") " pod="openstack/barbican-db-create-884n6" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.061803 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/988b2688-7981-4093-a1d2-45796fb69f52-operator-scripts\") pod \"cinder-faad-account-create-update-4btpg\" (UID: 
\"988b2688-7981-4093-a1d2-45796fb69f52\") " pod="openstack/cinder-faad-account-create-update-4btpg" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.061966 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f90ad69-2b58-48f1-a605-63486d38956f-operator-scripts\") pod \"barbican-db-create-884n6\" (UID: \"5f90ad69-2b58-48f1-a605-63486d38956f\") " pod="openstack/barbican-db-create-884n6" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.079176 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frnlh\" (UniqueName: \"kubernetes.io/projected/988b2688-7981-4093-a1d2-45796fb69f52-kube-api-access-frnlh\") pod \"cinder-faad-account-create-update-4btpg\" (UID: \"988b2688-7981-4093-a1d2-45796fb69f52\") " pod="openstack/cinder-faad-account-create-update-4btpg" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.079925 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxxvw\" (UniqueName: \"kubernetes.io/projected/5f90ad69-2b58-48f1-a605-63486d38956f-kube-api-access-xxxvw\") pod \"barbican-db-create-884n6\" (UID: \"5f90ad69-2b58-48f1-a605-63486d38956f\") " pod="openstack/barbican-db-create-884n6" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.135984 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-884n6" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.161976 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-4b52-account-create-update-fnb8z" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.262452 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-faad-account-create-update-4btpg" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.264417 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-b0d0-account-create-update-jg2cl"] Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.265589 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b0d0-account-create-update-jg2cl" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.272710 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.274781 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-b0d0-account-create-update-jg2cl"] Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.365141 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdrgc\" (UniqueName: \"kubernetes.io/projected/85bbee56-6cf4-4653-b69f-59b68063b3a1-kube-api-access-kdrgc\") pod \"neutron-b0d0-account-create-update-jg2cl\" (UID: \"85bbee56-6cf4-4653-b69f-59b68063b3a1\") " pod="openstack/neutron-b0d0-account-create-update-jg2cl" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.365388 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85bbee56-6cf4-4653-b69f-59b68063b3a1-operator-scripts\") pod \"neutron-b0d0-account-create-update-jg2cl\" (UID: \"85bbee56-6cf4-4653-b69f-59b68063b3a1\") " pod="openstack/neutron-b0d0-account-create-update-jg2cl" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.417254 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-nfdkf"] Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.418461 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-nfdkf" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.442044 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-nfdkf"] Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.467465 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdrgc\" (UniqueName: \"kubernetes.io/projected/85bbee56-6cf4-4653-b69f-59b68063b3a1-kube-api-access-kdrgc\") pod \"neutron-b0d0-account-create-update-jg2cl\" (UID: \"85bbee56-6cf4-4653-b69f-59b68063b3a1\") " pod="openstack/neutron-b0d0-account-create-update-jg2cl" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.467556 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85bbee56-6cf4-4653-b69f-59b68063b3a1-operator-scripts\") pod \"neutron-b0d0-account-create-update-jg2cl\" (UID: \"85bbee56-6cf4-4653-b69f-59b68063b3a1\") " pod="openstack/neutron-b0d0-account-create-update-jg2cl" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.468385 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85bbee56-6cf4-4653-b69f-59b68063b3a1-operator-scripts\") pod \"neutron-b0d0-account-create-update-jg2cl\" (UID: \"85bbee56-6cf4-4653-b69f-59b68063b3a1\") " pod="openstack/neutron-b0d0-account-create-update-jg2cl" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.485767 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdrgc\" (UniqueName: \"kubernetes.io/projected/85bbee56-6cf4-4653-b69f-59b68063b3a1-kube-api-access-kdrgc\") pod \"neutron-b0d0-account-create-update-jg2cl\" (UID: \"85bbee56-6cf4-4653-b69f-59b68063b3a1\") " pod="openstack/neutron-b0d0-account-create-update-jg2cl" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.570535 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsr6d\" (UniqueName: \"kubernetes.io/projected/ba247535-e91f-47de-a9c2-0ce8e91f8d23-kube-api-access-gsr6d\") pod \"neutron-db-create-nfdkf\" (UID: \"ba247535-e91f-47de-a9c2-0ce8e91f8d23\") " pod="openstack/neutron-db-create-nfdkf" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.570611 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba247535-e91f-47de-a9c2-0ce8e91f8d23-operator-scripts\") pod \"neutron-db-create-nfdkf\" (UID: \"ba247535-e91f-47de-a9c2-0ce8e91f8d23\") " pod="openstack/neutron-db-create-nfdkf" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.573089 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-28f8s"] Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.574849 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-28f8s" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.577431 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-5vfwv" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.577470 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.577793 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.578697 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.589957 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-b0d0-account-create-update-jg2cl" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.639577 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-28f8s"] Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.672451 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5977817a-76bd-4df7-b942-4553334f046c-combined-ca-bundle\") pod \"keystone-db-sync-28f8s\" (UID: \"5977817a-76bd-4df7-b942-4553334f046c\") " pod="openstack/keystone-db-sync-28f8s" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.672575 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6877\" (UniqueName: \"kubernetes.io/projected/5977817a-76bd-4df7-b942-4553334f046c-kube-api-access-k6877\") pod \"keystone-db-sync-28f8s\" (UID: \"5977817a-76bd-4df7-b942-4553334f046c\") " pod="openstack/keystone-db-sync-28f8s" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.672663 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5977817a-76bd-4df7-b942-4553334f046c-config-data\") pod \"keystone-db-sync-28f8s\" (UID: \"5977817a-76bd-4df7-b942-4553334f046c\") " pod="openstack/keystone-db-sync-28f8s" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.672712 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsr6d\" (UniqueName: \"kubernetes.io/projected/ba247535-e91f-47de-a9c2-0ce8e91f8d23-kube-api-access-gsr6d\") pod \"neutron-db-create-nfdkf\" (UID: \"ba247535-e91f-47de-a9c2-0ce8e91f8d23\") " pod="openstack/neutron-db-create-nfdkf" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.672739 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/ba247535-e91f-47de-a9c2-0ce8e91f8d23-operator-scripts\") pod \"neutron-db-create-nfdkf\" (UID: \"ba247535-e91f-47de-a9c2-0ce8e91f8d23\") " pod="openstack/neutron-db-create-nfdkf" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.673921 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba247535-e91f-47de-a9c2-0ce8e91f8d23-operator-scripts\") pod \"neutron-db-create-nfdkf\" (UID: \"ba247535-e91f-47de-a9c2-0ce8e91f8d23\") " pod="openstack/neutron-db-create-nfdkf" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.692612 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsr6d\" (UniqueName: \"kubernetes.io/projected/ba247535-e91f-47de-a9c2-0ce8e91f8d23-kube-api-access-gsr6d\") pod \"neutron-db-create-nfdkf\" (UID: \"ba247535-e91f-47de-a9c2-0ce8e91f8d23\") " pod="openstack/neutron-db-create-nfdkf" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.733066 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-nfdkf" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.774073 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5977817a-76bd-4df7-b942-4553334f046c-combined-ca-bundle\") pod \"keystone-db-sync-28f8s\" (UID: \"5977817a-76bd-4df7-b942-4553334f046c\") " pod="openstack/keystone-db-sync-28f8s" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.774171 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6877\" (UniqueName: \"kubernetes.io/projected/5977817a-76bd-4df7-b942-4553334f046c-kube-api-access-k6877\") pod \"keystone-db-sync-28f8s\" (UID: \"5977817a-76bd-4df7-b942-4553334f046c\") " pod="openstack/keystone-db-sync-28f8s" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.774299 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5977817a-76bd-4df7-b942-4553334f046c-config-data\") pod \"keystone-db-sync-28f8s\" (UID: \"5977817a-76bd-4df7-b942-4553334f046c\") " pod="openstack/keystone-db-sync-28f8s" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.780862 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5977817a-76bd-4df7-b942-4553334f046c-combined-ca-bundle\") pod \"keystone-db-sync-28f8s\" (UID: \"5977817a-76bd-4df7-b942-4553334f046c\") " pod="openstack/keystone-db-sync-28f8s" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.780968 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5977817a-76bd-4df7-b942-4553334f046c-config-data\") pod \"keystone-db-sync-28f8s\" (UID: \"5977817a-76bd-4df7-b942-4553334f046c\") " pod="openstack/keystone-db-sync-28f8s" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 
16:05:12.792873 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6877\" (UniqueName: \"kubernetes.io/projected/5977817a-76bd-4df7-b942-4553334f046c-kube-api-access-k6877\") pod \"keystone-db-sync-28f8s\" (UID: \"5977817a-76bd-4df7-b942-4553334f046c\") " pod="openstack/keystone-db-sync-28f8s" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.902006 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-28f8s" Jan 21 16:05:12 crc kubenswrapper[4760]: I0121 16:05:12.978898 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d1ccc2ed-d1e8-4b84-807d-55d70e8def12-etc-swift\") pod \"swift-storage-0\" (UID: \"d1ccc2ed-d1e8-4b84-807d-55d70e8def12\") " pod="openstack/swift-storage-0" Jan 21 16:05:12 crc kubenswrapper[4760]: E0121 16:05:12.979055 4760 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 21 16:05:12 crc kubenswrapper[4760]: E0121 16:05:12.979076 4760 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 21 16:05:12 crc kubenswrapper[4760]: E0121 16:05:12.979140 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d1ccc2ed-d1e8-4b84-807d-55d70e8def12-etc-swift podName:d1ccc2ed-d1e8-4b84-807d-55d70e8def12 nodeName:}" failed. No retries permitted until 2026-01-21 16:05:20.979120491 +0000 UTC m=+1091.646890069 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d1ccc2ed-d1e8-4b84-807d-55d70e8def12-etc-swift") pod "swift-storage-0" (UID: "d1ccc2ed-d1e8-4b84-807d-55d70e8def12") : configmap "swift-ring-files" not found Jan 21 16:05:13 crc kubenswrapper[4760]: I0121 16:05:13.463748 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 21 16:05:13 crc kubenswrapper[4760]: I0121 16:05:13.535820 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7cb9-account-create-update-8b224" event={"ID":"d1305608-194d-4c7f-b3c7-8d6925fed34f","Type":"ContainerDied","Data":"962d0a240fa343388b8905ae749295a3fe8b79838542c887cc70346b700337c0"} Jan 21 16:05:13 crc kubenswrapper[4760]: I0121 16:05:13.535909 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="962d0a240fa343388b8905ae749295a3fe8b79838542c887cc70346b700337c0" Jan 21 16:05:13 crc kubenswrapper[4760]: I0121 16:05:13.538555 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-c8jlj" event={"ID":"7a391de4-6ff8-49ac-93cb-98b98202f3f1","Type":"ContainerDied","Data":"d747c33a646048c45dd2a884f14d6fc41ba8476082b8916892b5d80fd9283506"} Jan 21 16:05:13 crc kubenswrapper[4760]: I0121 16:05:13.538703 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d747c33a646048c45dd2a884f14d6fc41ba8476082b8916892b5d80fd9283506" Jan 21 16:05:13 crc kubenswrapper[4760]: I0121 16:05:13.547000 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-c8jlj" Jan 21 16:05:13 crc kubenswrapper[4760]: I0121 16:05:13.568100 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-7cb9-account-create-update-8b224" Jan 21 16:05:13 crc kubenswrapper[4760]: I0121 16:05:13.691456 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1305608-194d-4c7f-b3c7-8d6925fed34f-operator-scripts\") pod \"d1305608-194d-4c7f-b3c7-8d6925fed34f\" (UID: \"d1305608-194d-4c7f-b3c7-8d6925fed34f\") " Jan 21 16:05:13 crc kubenswrapper[4760]: I0121 16:05:13.691534 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a391de4-6ff8-49ac-93cb-98b98202f3f1-operator-scripts\") pod \"7a391de4-6ff8-49ac-93cb-98b98202f3f1\" (UID: \"7a391de4-6ff8-49ac-93cb-98b98202f3f1\") " Jan 21 16:05:13 crc kubenswrapper[4760]: I0121 16:05:13.691661 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l959p\" (UniqueName: \"kubernetes.io/projected/d1305608-194d-4c7f-b3c7-8d6925fed34f-kube-api-access-l959p\") pod \"d1305608-194d-4c7f-b3c7-8d6925fed34f\" (UID: \"d1305608-194d-4c7f-b3c7-8d6925fed34f\") " Jan 21 16:05:13 crc kubenswrapper[4760]: I0121 16:05:13.691696 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fbpp\" (UniqueName: \"kubernetes.io/projected/7a391de4-6ff8-49ac-93cb-98b98202f3f1-kube-api-access-7fbpp\") pod \"7a391de4-6ff8-49ac-93cb-98b98202f3f1\" (UID: \"7a391de4-6ff8-49ac-93cb-98b98202f3f1\") " Jan 21 16:05:13 crc kubenswrapper[4760]: I0121 16:05:13.693098 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1305608-194d-4c7f-b3c7-8d6925fed34f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d1305608-194d-4c7f-b3c7-8d6925fed34f" (UID: "d1305608-194d-4c7f-b3c7-8d6925fed34f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:05:13 crc kubenswrapper[4760]: I0121 16:05:13.693206 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a391de4-6ff8-49ac-93cb-98b98202f3f1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7a391de4-6ff8-49ac-93cb-98b98202f3f1" (UID: "7a391de4-6ff8-49ac-93cb-98b98202f3f1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:05:13 crc kubenswrapper[4760]: I0121 16:05:13.694264 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1305608-194d-4c7f-b3c7-8d6925fed34f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:13 crc kubenswrapper[4760]: I0121 16:05:13.694303 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a391de4-6ff8-49ac-93cb-98b98202f3f1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:13 crc kubenswrapper[4760]: I0121 16:05:13.704686 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a391de4-6ff8-49ac-93cb-98b98202f3f1-kube-api-access-7fbpp" (OuterVolumeSpecName: "kube-api-access-7fbpp") pod "7a391de4-6ff8-49ac-93cb-98b98202f3f1" (UID: "7a391de4-6ff8-49ac-93cb-98b98202f3f1"). InnerVolumeSpecName "kube-api-access-7fbpp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:05:13 crc kubenswrapper[4760]: I0121 16:05:13.707241 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1305608-194d-4c7f-b3c7-8d6925fed34f-kube-api-access-l959p" (OuterVolumeSpecName: "kube-api-access-l959p") pod "d1305608-194d-4c7f-b3c7-8d6925fed34f" (UID: "d1305608-194d-4c7f-b3c7-8d6925fed34f"). InnerVolumeSpecName "kube-api-access-l959p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:05:13 crc kubenswrapper[4760]: I0121 16:05:13.796532 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l959p\" (UniqueName: \"kubernetes.io/projected/d1305608-194d-4c7f-b3c7-8d6925fed34f-kube-api-access-l959p\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:13 crc kubenswrapper[4760]: I0121 16:05:13.796640 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7fbpp\" (UniqueName: \"kubernetes.io/projected/7a391de4-6ff8-49ac-93cb-98b98202f3f1-kube-api-access-7fbpp\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:14 crc kubenswrapper[4760]: I0121 16:05:14.085928 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-b0d0-account-create-update-jg2cl"] Jan 21 16:05:14 crc kubenswrapper[4760]: I0121 16:05:14.231603 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-gmwwz" Jan 21 16:05:14 crc kubenswrapper[4760]: W0121 16:05:14.236705 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85bbee56_6cf4_4653_b69f_59b68063b3a1.slice/crio-0b19a525fe07a8ec655096d816396d8ad482448d1de5bf7df56b6b4078e80af1 WatchSource:0}: Error finding container 0b19a525fe07a8ec655096d816396d8ad482448d1de5bf7df56b6b4078e80af1: Status 404 returned error can't find the container with id 0b19a525fe07a8ec655096d816396d8ad482448d1de5bf7df56b6b4078e80af1 Jan 21 16:05:14 crc kubenswrapper[4760]: I0121 16:05:14.399970 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-vpn5h"] Jan 21 16:05:14 crc kubenswrapper[4760]: I0121 16:05:14.400303 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-vpn5h" podUID="662b2f90-4ca1-4670-9b55-57a691e191ff" containerName="dnsmasq-dns" 
containerID="cri-o://2d551787711d5acc1cce8ad29717be0481e72a6277cf19787efaef4058b7d0b1" gracePeriod=10 Jan 21 16:05:14 crc kubenswrapper[4760]: I0121 16:05:14.568288 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b0d0-account-create-update-jg2cl" event={"ID":"85bbee56-6cf4-4653-b69f-59b68063b3a1","Type":"ContainerStarted","Data":"0b19a525fe07a8ec655096d816396d8ad482448d1de5bf7df56b6b4078e80af1"} Jan 21 16:05:14 crc kubenswrapper[4760]: I0121 16:05:14.581560 4760 generic.go:334] "Generic (PLEG): container finished" podID="662b2f90-4ca1-4670-9b55-57a691e191ff" containerID="2d551787711d5acc1cce8ad29717be0481e72a6277cf19787efaef4058b7d0b1" exitCode=0 Jan 21 16:05:14 crc kubenswrapper[4760]: I0121 16:05:14.581708 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-c8jlj" Jan 21 16:05:14 crc kubenswrapper[4760]: I0121 16:05:14.583433 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-vpn5h" event={"ID":"662b2f90-4ca1-4670-9b55-57a691e191ff","Type":"ContainerDied","Data":"2d551787711d5acc1cce8ad29717be0481e72a6277cf19787efaef4058b7d0b1"} Jan 21 16:05:14 crc kubenswrapper[4760]: I0121 16:05:14.583645 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-7cb9-account-create-update-8b224" Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.084639 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8554648995-vpn5h" podUID="662b2f90-4ca1-4670-9b55-57a691e191ff" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: connect: connection refused" Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.103813 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-jfrjn" Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.264061 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-faad-account-create-update-4btpg"] Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.296702 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-nfdkf"] Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.303819 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-7dgzd"] Jan 21 16:05:15 crc kubenswrapper[4760]: W0121 16:05:15.323218 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba247535_e91f_47de_a9c2_0ce8e91f8d23.slice/crio-b21ff1b9d58bbbecd4609101968febbc6809924558358397be858279389fd429 WatchSource:0}: Error finding container b21ff1b9d58bbbecd4609101968febbc6809924558358397be858279389fd429: Status 404 returned error can't find the container with id b21ff1b9d58bbbecd4609101968febbc6809924558358397be858279389fd429 Jan 21 16:05:15 crc kubenswrapper[4760]: W0121 16:05:15.324212 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podddbef96c_1bfa_412a_a49d_460b6f6d90f9.slice/crio-d24633c590c02b9baf90fa017a2b139524d4afbb7dc5bf736c5d2240c5b08aee WatchSource:0}: Error finding container 
d24633c590c02b9baf90fa017a2b139524d4afbb7dc5bf736c5d2240c5b08aee: Status 404 returned error can't find the container with id d24633c590c02b9baf90fa017a2b139524d4afbb7dc5bf736c5d2240c5b08aee Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.382822 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-vpn5h" Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.451133 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wn7pf\" (UniqueName: \"kubernetes.io/projected/662b2f90-4ca1-4670-9b55-57a691e191ff-kube-api-access-wn7pf\") pod \"662b2f90-4ca1-4670-9b55-57a691e191ff\" (UID: \"662b2f90-4ca1-4670-9b55-57a691e191ff\") " Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.451210 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/662b2f90-4ca1-4670-9b55-57a691e191ff-ovsdbserver-nb\") pod \"662b2f90-4ca1-4670-9b55-57a691e191ff\" (UID: \"662b2f90-4ca1-4670-9b55-57a691e191ff\") " Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.451260 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/662b2f90-4ca1-4670-9b55-57a691e191ff-ovsdbserver-sb\") pod \"662b2f90-4ca1-4670-9b55-57a691e191ff\" (UID: \"662b2f90-4ca1-4670-9b55-57a691e191ff\") " Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.452973 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/662b2f90-4ca1-4670-9b55-57a691e191ff-config\") pod \"662b2f90-4ca1-4670-9b55-57a691e191ff\" (UID: \"662b2f90-4ca1-4670-9b55-57a691e191ff\") " Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.453065 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/662b2f90-4ca1-4670-9b55-57a691e191ff-dns-svc\") pod \"662b2f90-4ca1-4670-9b55-57a691e191ff\" (UID: \"662b2f90-4ca1-4670-9b55-57a691e191ff\") " Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.486499 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/662b2f90-4ca1-4670-9b55-57a691e191ff-kube-api-access-wn7pf" (OuterVolumeSpecName: "kube-api-access-wn7pf") pod "662b2f90-4ca1-4670-9b55-57a691e191ff" (UID: "662b2f90-4ca1-4670-9b55-57a691e191ff"). InnerVolumeSpecName "kube-api-access-wn7pf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.520554 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-4b52-account-create-update-fnb8z"] Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.534914 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-28f8s"] Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.557499 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-884n6"] Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.559187 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wn7pf\" (UniqueName: \"kubernetes.io/projected/662b2f90-4ca1-4670-9b55-57a691e191ff-kube-api-access-wn7pf\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.602737 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-884n6" event={"ID":"5f90ad69-2b58-48f1-a605-63486d38956f","Type":"ContainerStarted","Data":"3e531d4f9c650fd6c16f60d1a5dc9df3fc673bb01d255423411a3c6bcb6f7c14"} Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.605418 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4b52-account-create-update-fnb8z" 
event={"ID":"c29c2669-d63b-4ac4-8680-fc14ced158f1","Type":"ContainerStarted","Data":"db012bd0f041361a4d3a4fe4c5290ccdc01a62c7f9dfe1012638646f99a67f37"} Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.608446 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-7dgzd" event={"ID":"ddbef96c-1bfa-412a-a49d-460b6f6d90f9","Type":"ContainerStarted","Data":"d24633c590c02b9baf90fa017a2b139524d4afbb7dc5bf736c5d2240c5b08aee"} Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.611168 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b0d0-account-create-update-jg2cl" event={"ID":"85bbee56-6cf4-4653-b69f-59b68063b3a1","Type":"ContainerStarted","Data":"f3059f2297611d8f3f39a3872eddb93d73f1a7a124c9fad360dc2e76972fdc19"} Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.615721 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4ghjk" event={"ID":"148dd39f-7ece-4735-ade5-103446b56147","Type":"ContainerStarted","Data":"7045c13b067ecb62baec2b3a1ce9d171e656d7b9302065660c9eb374edf7463c"} Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.617559 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-faad-account-create-update-4btpg" event={"ID":"988b2688-7981-4093-a1d2-45796fb69f52","Type":"ContainerStarted","Data":"d9249a9682406f0ed27772f230535b22120d8aa6105ed99bfd541064c467fb18"} Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.619158 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-vpn5h" event={"ID":"662b2f90-4ca1-4670-9b55-57a691e191ff","Type":"ContainerDied","Data":"48c80916c91e39397ff5a93ea5bc1cf8687a4f0ad22dad533560450611beba05"} Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.619191 4760 scope.go:117] "RemoveContainer" containerID="2d551787711d5acc1cce8ad29717be0481e72a6277cf19787efaef4058b7d0b1" Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.619280 4760 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-vpn5h" Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.638915 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-b0d0-account-create-update-jg2cl" podStartSLOduration=3.63887955 podStartE2EDuration="3.63887955s" podCreationTimestamp="2026-01-21 16:05:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:05:15.635057325 +0000 UTC m=+1086.302826903" watchObservedRunningTime="2026-01-21 16:05:15.63887955 +0000 UTC m=+1086.306649128" Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.647201 4760 scope.go:117] "RemoveContainer" containerID="8317b3fdc217b7dd117467332217df1262840073d360445fc7d8e24e5aa0880b" Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.651171 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-vscfw" event={"ID":"c41049e0-0ea2-4944-a23b-739987c73dce","Type":"ContainerStarted","Data":"a44eceff90e62e30a52d0e1163f8404bfc13d78e3229cc13ac35fa7fd7798f1d"} Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.651213 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-28f8s" event={"ID":"5977817a-76bd-4df7-b942-4553334f046c","Type":"ContainerStarted","Data":"eaad9eaea810cc06481da534223ae7e3a6e10e8dfdb292752003e6072fd72b4a"} Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.651226 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"50c45f6c-b35d-41f8-b358-afaf380d8f08","Type":"ContainerStarted","Data":"79892c0c21ecb1f7f774cd2b93c344cc07d5fef2b7a4f022df5ace3dd54cb224"} Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.656807 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-nfdkf" 
event={"ID":"ba247535-e91f-47de-a9c2-0ce8e91f8d23","Type":"ContainerStarted","Data":"b21ff1b9d58bbbecd4609101968febbc6809924558358397be858279389fd429"} Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.660759 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-4ghjk" podStartSLOduration=6.660730898 podStartE2EDuration="6.660730898s" podCreationTimestamp="2026-01-21 16:05:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:05:15.651556768 +0000 UTC m=+1086.319326346" watchObservedRunningTime="2026-01-21 16:05:15.660730898 +0000 UTC m=+1086.328500476" Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.722686 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/662b2f90-4ca1-4670-9b55-57a691e191ff-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "662b2f90-4ca1-4670-9b55-57a691e191ff" (UID: "662b2f90-4ca1-4670-9b55-57a691e191ff"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.728424 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/662b2f90-4ca1-4670-9b55-57a691e191ff-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "662b2f90-4ca1-4670-9b55-57a691e191ff" (UID: "662b2f90-4ca1-4670-9b55-57a691e191ff"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.736107 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/662b2f90-4ca1-4670-9b55-57a691e191ff-config" (OuterVolumeSpecName: "config") pod "662b2f90-4ca1-4670-9b55-57a691e191ff" (UID: "662b2f90-4ca1-4670-9b55-57a691e191ff"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.739110 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/662b2f90-4ca1-4670-9b55-57a691e191ff-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "662b2f90-4ca1-4670-9b55-57a691e191ff" (UID: "662b2f90-4ca1-4670-9b55-57a691e191ff"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.764441 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/662b2f90-4ca1-4670-9b55-57a691e191ff-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.764847 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/662b2f90-4ca1-4670-9b55-57a691e191ff-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.764861 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/662b2f90-4ca1-4670-9b55-57a691e191ff-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.764872 4760 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/662b2f90-4ca1-4670-9b55-57a691e191ff-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.986519 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-vscfw" podStartSLOduration=3.362434528 podStartE2EDuration="10.986479128s" podCreationTimestamp="2026-01-21 16:05:05 +0000 UTC" firstStartedPulling="2026-01-21 16:05:06.724586932 +0000 UTC m=+1077.392356510" lastFinishedPulling="2026-01-21 16:05:14.348631532 +0000 UTC m=+1085.016401110" 
observedRunningTime="2026-01-21 16:05:15.684006355 +0000 UTC m=+1086.351775943" watchObservedRunningTime="2026-01-21 16:05:15.986479128 +0000 UTC m=+1086.654248706" Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.990805 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-vpn5h"] Jan 21 16:05:15 crc kubenswrapper[4760]: I0121 16:05:15.997596 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-vpn5h"] Jan 21 16:05:16 crc kubenswrapper[4760]: I0121 16:05:16.250472 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ltr79" podUID="c17cd40e-6e7b-4c1e-9ca8-e6edc1248330" containerName="ovn-controller" probeResult="failure" output=< Jan 21 16:05:16 crc kubenswrapper[4760]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 21 16:05:16 crc kubenswrapper[4760]: > Jan 21 16:05:16 crc kubenswrapper[4760]: I0121 16:05:16.674961 4760 generic.go:334] "Generic (PLEG): container finished" podID="85bbee56-6cf4-4653-b69f-59b68063b3a1" containerID="f3059f2297611d8f3f39a3872eddb93d73f1a7a124c9fad360dc2e76972fdc19" exitCode=0 Jan 21 16:05:16 crc kubenswrapper[4760]: I0121 16:05:16.675648 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b0d0-account-create-update-jg2cl" event={"ID":"85bbee56-6cf4-4653-b69f-59b68063b3a1","Type":"ContainerDied","Data":"f3059f2297611d8f3f39a3872eddb93d73f1a7a124c9fad360dc2e76972fdc19"} Jan 21 16:05:16 crc kubenswrapper[4760]: I0121 16:05:16.693314 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-nfdkf" event={"ID":"ba247535-e91f-47de-a9c2-0ce8e91f8d23","Type":"ContainerStarted","Data":"fe5781da8649c8f98ecf95f282a3089bdbf617af0333c695ebaab112efe3ad7d"} Jan 21 16:05:16 crc kubenswrapper[4760]: I0121 16:05:16.718597 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-7dgzd" 
event={"ID":"ddbef96c-1bfa-412a-a49d-460b6f6d90f9","Type":"ContainerStarted","Data":"d739d05fc22214fe3c7c409de25f8b4ecba3a6dc0a47b8d9d33db77a68c9cda6"} Jan 21 16:05:16 crc kubenswrapper[4760]: I0121 16:05:16.721593 4760 generic.go:334] "Generic (PLEG): container finished" podID="148dd39f-7ece-4735-ade5-103446b56147" containerID="7045c13b067ecb62baec2b3a1ce9d171e656d7b9302065660c9eb374edf7463c" exitCode=0 Jan 21 16:05:16 crc kubenswrapper[4760]: I0121 16:05:16.721949 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4ghjk" event={"ID":"148dd39f-7ece-4735-ade5-103446b56147","Type":"ContainerDied","Data":"7045c13b067ecb62baec2b3a1ce9d171e656d7b9302065660c9eb374edf7463c"} Jan 21 16:05:16 crc kubenswrapper[4760]: I0121 16:05:16.726453 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-nfdkf" podStartSLOduration=4.726434412 podStartE2EDuration="4.726434412s" podCreationTimestamp="2026-01-21 16:05:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:05:16.723150647 +0000 UTC m=+1087.390920235" watchObservedRunningTime="2026-01-21 16:05:16.726434412 +0000 UTC m=+1087.394203990" Jan 21 16:05:16 crc kubenswrapper[4760]: I0121 16:05:16.726615 4760 generic.go:334] "Generic (PLEG): container finished" podID="988b2688-7981-4093-a1d2-45796fb69f52" containerID="90db8f63e9a72921008b460ecbc6a78ebe203277ffd590f54ada4404d5230b48" exitCode=0 Jan 21 16:05:16 crc kubenswrapper[4760]: I0121 16:05:16.726728 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-faad-account-create-update-4btpg" event={"ID":"988b2688-7981-4093-a1d2-45796fb69f52","Type":"ContainerDied","Data":"90db8f63e9a72921008b460ecbc6a78ebe203277ffd590f54ada4404d5230b48"} Jan 21 16:05:16 crc kubenswrapper[4760]: I0121 16:05:16.728600 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-db-create-884n6" event={"ID":"5f90ad69-2b58-48f1-a605-63486d38956f","Type":"ContainerStarted","Data":"076671479fb4a9b0098f421cb1f3323d0a1e7ed2c971c735c221adbcc2c7c91d"} Jan 21 16:05:16 crc kubenswrapper[4760]: I0121 16:05:16.731825 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4b52-account-create-update-fnb8z" event={"ID":"c29c2669-d63b-4ac4-8680-fc14ced158f1","Type":"ContainerStarted","Data":"0cb7c1192b9373f0abfb8527833717ba33c1e62912901a62d92e86f693360455"} Jan 21 16:05:16 crc kubenswrapper[4760]: I0121 16:05:16.735933 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"50c45f6c-b35d-41f8-b358-afaf380d8f08","Type":"ContainerStarted","Data":"f29c8880d7ac46bedd7035586036c390fd0579a19b5d5160cce934a034c3cf07"} Jan 21 16:05:16 crc kubenswrapper[4760]: I0121 16:05:16.736005 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 21 16:05:16 crc kubenswrapper[4760]: I0121 16:05:16.866786 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=5.331955448 podStartE2EDuration="11.866761514s" podCreationTimestamp="2026-01-21 16:05:05 +0000 UTC" firstStartedPulling="2026-01-21 16:05:07.026848691 +0000 UTC m=+1077.694618279" lastFinishedPulling="2026-01-21 16:05:13.561654767 +0000 UTC m=+1084.229424345" observedRunningTime="2026-01-21 16:05:16.856924271 +0000 UTC m=+1087.524693849" watchObservedRunningTime="2026-01-21 16:05:16.866761514 +0000 UTC m=+1087.534531092" Jan 21 16:05:17 crc kubenswrapper[4760]: I0121 16:05:17.635352 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="662b2f90-4ca1-4670-9b55-57a691e191ff" path="/var/lib/kubelet/pods/662b2f90-4ca1-4670-9b55-57a691e191ff/volumes" Jan 21 16:05:17 crc kubenswrapper[4760]: I0121 16:05:17.774992 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-nf7wp"] Jan 
21 16:05:17 crc kubenswrapper[4760]: E0121 16:05:17.776077 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1305608-194d-4c7f-b3c7-8d6925fed34f" containerName="mariadb-account-create-update" Jan 21 16:05:17 crc kubenswrapper[4760]: I0121 16:05:17.776100 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1305608-194d-4c7f-b3c7-8d6925fed34f" containerName="mariadb-account-create-update" Jan 21 16:05:17 crc kubenswrapper[4760]: E0121 16:05:17.776112 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="662b2f90-4ca1-4670-9b55-57a691e191ff" containerName="dnsmasq-dns" Jan 21 16:05:17 crc kubenswrapper[4760]: I0121 16:05:17.776119 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="662b2f90-4ca1-4670-9b55-57a691e191ff" containerName="dnsmasq-dns" Jan 21 16:05:17 crc kubenswrapper[4760]: E0121 16:05:17.776134 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a391de4-6ff8-49ac-93cb-98b98202f3f1" containerName="mariadb-database-create" Jan 21 16:05:17 crc kubenswrapper[4760]: I0121 16:05:17.776142 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a391de4-6ff8-49ac-93cb-98b98202f3f1" containerName="mariadb-database-create" Jan 21 16:05:17 crc kubenswrapper[4760]: E0121 16:05:17.776167 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="662b2f90-4ca1-4670-9b55-57a691e191ff" containerName="init" Jan 21 16:05:17 crc kubenswrapper[4760]: I0121 16:05:17.776175 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="662b2f90-4ca1-4670-9b55-57a691e191ff" containerName="init" Jan 21 16:05:17 crc kubenswrapper[4760]: I0121 16:05:17.776424 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="662b2f90-4ca1-4670-9b55-57a691e191ff" containerName="dnsmasq-dns" Jan 21 16:05:17 crc kubenswrapper[4760]: I0121 16:05:17.776451 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a391de4-6ff8-49ac-93cb-98b98202f3f1" 
containerName="mariadb-database-create" Jan 21 16:05:17 crc kubenswrapper[4760]: I0121 16:05:17.776466 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1305608-194d-4c7f-b3c7-8d6925fed34f" containerName="mariadb-account-create-update" Jan 21 16:05:17 crc kubenswrapper[4760]: I0121 16:05:17.777190 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-nf7wp" Jan 21 16:05:17 crc kubenswrapper[4760]: I0121 16:05:17.782683 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 21 16:05:17 crc kubenswrapper[4760]: I0121 16:05:17.782747 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-2lr4r" Jan 21 16:05:17 crc kubenswrapper[4760]: I0121 16:05:17.790622 4760 generic.go:334] "Generic (PLEG): container finished" podID="ba247535-e91f-47de-a9c2-0ce8e91f8d23" containerID="fe5781da8649c8f98ecf95f282a3089bdbf617af0333c695ebaab112efe3ad7d" exitCode=0 Jan 21 16:05:17 crc kubenswrapper[4760]: I0121 16:05:17.790810 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-nfdkf" event={"ID":"ba247535-e91f-47de-a9c2-0ce8e91f8d23","Type":"ContainerDied","Data":"fe5781da8649c8f98ecf95f282a3089bdbf617af0333c695ebaab112efe3ad7d"} Jan 21 16:05:17 crc kubenswrapper[4760]: I0121 16:05:17.793236 4760 generic.go:334] "Generic (PLEG): container finished" podID="5f90ad69-2b58-48f1-a605-63486d38956f" containerID="076671479fb4a9b0098f421cb1f3323d0a1e7ed2c971c735c221adbcc2c7c91d" exitCode=0 Jan 21 16:05:17 crc kubenswrapper[4760]: I0121 16:05:17.793492 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-884n6" event={"ID":"5f90ad69-2b58-48f1-a605-63486d38956f","Type":"ContainerDied","Data":"076671479fb4a9b0098f421cb1f3323d0a1e7ed2c971c735c221adbcc2c7c91d"} Jan 21 16:05:17 crc kubenswrapper[4760]: I0121 16:05:17.797080 4760 generic.go:334] "Generic (PLEG): 
container finished" podID="c29c2669-d63b-4ac4-8680-fc14ced158f1" containerID="0cb7c1192b9373f0abfb8527833717ba33c1e62912901a62d92e86f693360455" exitCode=0 Jan 21 16:05:17 crc kubenswrapper[4760]: I0121 16:05:17.797158 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4b52-account-create-update-fnb8z" event={"ID":"c29c2669-d63b-4ac4-8680-fc14ced158f1","Type":"ContainerDied","Data":"0cb7c1192b9373f0abfb8527833717ba33c1e62912901a62d92e86f693360455"} Jan 21 16:05:17 crc kubenswrapper[4760]: I0121 16:05:17.797195 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-nf7wp"] Jan 21 16:05:17 crc kubenswrapper[4760]: I0121 16:05:17.805295 4760 generic.go:334] "Generic (PLEG): container finished" podID="ddbef96c-1bfa-412a-a49d-460b6f6d90f9" containerID="d739d05fc22214fe3c7c409de25f8b4ecba3a6dc0a47b8d9d33db77a68c9cda6" exitCode=0 Jan 21 16:05:17 crc kubenswrapper[4760]: I0121 16:05:17.805625 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-7dgzd" event={"ID":"ddbef96c-1bfa-412a-a49d-460b6f6d90f9","Type":"ContainerDied","Data":"d739d05fc22214fe3c7c409de25f8b4ecba3a6dc0a47b8d9d33db77a68c9cda6"} Jan 21 16:05:17 crc kubenswrapper[4760]: I0121 16:05:17.805700 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdqk9\" (UniqueName: \"kubernetes.io/projected/c4fdfaae-d8ad-46d6-b30a-1b671408ca51-kube-api-access-zdqk9\") pod \"glance-db-sync-nf7wp\" (UID: \"c4fdfaae-d8ad-46d6-b30a-1b671408ca51\") " pod="openstack/glance-db-sync-nf7wp" Jan 21 16:05:17 crc kubenswrapper[4760]: I0121 16:05:17.805743 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c4fdfaae-d8ad-46d6-b30a-1b671408ca51-db-sync-config-data\") pod \"glance-db-sync-nf7wp\" (UID: \"c4fdfaae-d8ad-46d6-b30a-1b671408ca51\") " 
pod="openstack/glance-db-sync-nf7wp" Jan 21 16:05:17 crc kubenswrapper[4760]: I0121 16:05:17.805775 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4fdfaae-d8ad-46d6-b30a-1b671408ca51-combined-ca-bundle\") pod \"glance-db-sync-nf7wp\" (UID: \"c4fdfaae-d8ad-46d6-b30a-1b671408ca51\") " pod="openstack/glance-db-sync-nf7wp" Jan 21 16:05:17 crc kubenswrapper[4760]: I0121 16:05:17.805799 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4fdfaae-d8ad-46d6-b30a-1b671408ca51-config-data\") pod \"glance-db-sync-nf7wp\" (UID: \"c4fdfaae-d8ad-46d6-b30a-1b671408ca51\") " pod="openstack/glance-db-sync-nf7wp" Jan 21 16:05:18 crc kubenswrapper[4760]: I0121 16:05:18.000149 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4fdfaae-d8ad-46d6-b30a-1b671408ca51-combined-ca-bundle\") pod \"glance-db-sync-nf7wp\" (UID: \"c4fdfaae-d8ad-46d6-b30a-1b671408ca51\") " pod="openstack/glance-db-sync-nf7wp" Jan 21 16:05:18 crc kubenswrapper[4760]: I0121 16:05:18.000231 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4fdfaae-d8ad-46d6-b30a-1b671408ca51-config-data\") pod \"glance-db-sync-nf7wp\" (UID: \"c4fdfaae-d8ad-46d6-b30a-1b671408ca51\") " pod="openstack/glance-db-sync-nf7wp" Jan 21 16:05:18 crc kubenswrapper[4760]: I0121 16:05:18.000671 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdqk9\" (UniqueName: \"kubernetes.io/projected/c4fdfaae-d8ad-46d6-b30a-1b671408ca51-kube-api-access-zdqk9\") pod \"glance-db-sync-nf7wp\" (UID: \"c4fdfaae-d8ad-46d6-b30a-1b671408ca51\") " pod="openstack/glance-db-sync-nf7wp" Jan 21 16:05:18 crc kubenswrapper[4760]: I0121 
16:05:18.000710 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c4fdfaae-d8ad-46d6-b30a-1b671408ca51-db-sync-config-data\") pod \"glance-db-sync-nf7wp\" (UID: \"c4fdfaae-d8ad-46d6-b30a-1b671408ca51\") " pod="openstack/glance-db-sync-nf7wp" Jan 21 16:05:18 crc kubenswrapper[4760]: I0121 16:05:18.010756 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c4fdfaae-d8ad-46d6-b30a-1b671408ca51-db-sync-config-data\") pod \"glance-db-sync-nf7wp\" (UID: \"c4fdfaae-d8ad-46d6-b30a-1b671408ca51\") " pod="openstack/glance-db-sync-nf7wp" Jan 21 16:05:18 crc kubenswrapper[4760]: I0121 16:05:18.016882 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4fdfaae-d8ad-46d6-b30a-1b671408ca51-combined-ca-bundle\") pod \"glance-db-sync-nf7wp\" (UID: \"c4fdfaae-d8ad-46d6-b30a-1b671408ca51\") " pod="openstack/glance-db-sync-nf7wp" Jan 21 16:05:18 crc kubenswrapper[4760]: I0121 16:05:18.040143 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4fdfaae-d8ad-46d6-b30a-1b671408ca51-config-data\") pod \"glance-db-sync-nf7wp\" (UID: \"c4fdfaae-d8ad-46d6-b30a-1b671408ca51\") " pod="openstack/glance-db-sync-nf7wp" Jan 21 16:05:18 crc kubenswrapper[4760]: I0121 16:05:18.042031 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdqk9\" (UniqueName: \"kubernetes.io/projected/c4fdfaae-d8ad-46d6-b30a-1b671408ca51-kube-api-access-zdqk9\") pod \"glance-db-sync-nf7wp\" (UID: \"c4fdfaae-d8ad-46d6-b30a-1b671408ca51\") " pod="openstack/glance-db-sync-nf7wp" Jan 21 16:05:18 crc kubenswrapper[4760]: I0121 16:05:18.113060 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-nf7wp" Jan 21 16:05:20 crc kubenswrapper[4760]: I0121 16:05:20.120031 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-jfrjn" Jan 21 16:05:20 crc kubenswrapper[4760]: I0121 16:05:20.330733 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ltr79-config-vjnxt"] Jan 21 16:05:20 crc kubenswrapper[4760]: I0121 16:05:20.332778 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ltr79-config-vjnxt" Jan 21 16:05:20 crc kubenswrapper[4760]: I0121 16:05:20.336778 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 21 16:05:20 crc kubenswrapper[4760]: I0121 16:05:20.338722 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ltr79-config-vjnxt"] Jan 21 16:05:20 crc kubenswrapper[4760]: I0121 16:05:20.339537 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/195dc125-dc26-4068-93b4-e5fca1c7d37d-var-run\") pod \"ovn-controller-ltr79-config-vjnxt\" (UID: \"195dc125-dc26-4068-93b4-e5fca1c7d37d\") " pod="openstack/ovn-controller-ltr79-config-vjnxt" Jan 21 16:05:20 crc kubenswrapper[4760]: I0121 16:05:20.339702 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpcsd\" (UniqueName: \"kubernetes.io/projected/195dc125-dc26-4068-93b4-e5fca1c7d37d-kube-api-access-vpcsd\") pod \"ovn-controller-ltr79-config-vjnxt\" (UID: \"195dc125-dc26-4068-93b4-e5fca1c7d37d\") " pod="openstack/ovn-controller-ltr79-config-vjnxt" Jan 21 16:05:20 crc kubenswrapper[4760]: I0121 16:05:20.339817 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/195dc125-dc26-4068-93b4-e5fca1c7d37d-var-run-ovn\") pod \"ovn-controller-ltr79-config-vjnxt\" (UID: \"195dc125-dc26-4068-93b4-e5fca1c7d37d\") " pod="openstack/ovn-controller-ltr79-config-vjnxt" Jan 21 16:05:20 crc kubenswrapper[4760]: I0121 16:05:20.339909 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/195dc125-dc26-4068-93b4-e5fca1c7d37d-var-log-ovn\") pod \"ovn-controller-ltr79-config-vjnxt\" (UID: \"195dc125-dc26-4068-93b4-e5fca1c7d37d\") " pod="openstack/ovn-controller-ltr79-config-vjnxt" Jan 21 16:05:20 crc kubenswrapper[4760]: I0121 16:05:20.339991 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/195dc125-dc26-4068-93b4-e5fca1c7d37d-additional-scripts\") pod \"ovn-controller-ltr79-config-vjnxt\" (UID: \"195dc125-dc26-4068-93b4-e5fca1c7d37d\") " pod="openstack/ovn-controller-ltr79-config-vjnxt" Jan 21 16:05:20 crc kubenswrapper[4760]: I0121 16:05:20.340115 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/195dc125-dc26-4068-93b4-e5fca1c7d37d-scripts\") pod \"ovn-controller-ltr79-config-vjnxt\" (UID: \"195dc125-dc26-4068-93b4-e5fca1c7d37d\") " pod="openstack/ovn-controller-ltr79-config-vjnxt" Jan 21 16:05:20 crc kubenswrapper[4760]: I0121 16:05:20.440844 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/195dc125-dc26-4068-93b4-e5fca1c7d37d-var-run-ovn\") pod \"ovn-controller-ltr79-config-vjnxt\" (UID: \"195dc125-dc26-4068-93b4-e5fca1c7d37d\") " pod="openstack/ovn-controller-ltr79-config-vjnxt" Jan 21 16:05:20 crc kubenswrapper[4760]: I0121 16:05:20.440907 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/195dc125-dc26-4068-93b4-e5fca1c7d37d-var-log-ovn\") pod \"ovn-controller-ltr79-config-vjnxt\" (UID: \"195dc125-dc26-4068-93b4-e5fca1c7d37d\") " pod="openstack/ovn-controller-ltr79-config-vjnxt" Jan 21 16:05:20 crc kubenswrapper[4760]: I0121 16:05:20.440928 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/195dc125-dc26-4068-93b4-e5fca1c7d37d-additional-scripts\") pod \"ovn-controller-ltr79-config-vjnxt\" (UID: \"195dc125-dc26-4068-93b4-e5fca1c7d37d\") " pod="openstack/ovn-controller-ltr79-config-vjnxt" Jan 21 16:05:20 crc kubenswrapper[4760]: I0121 16:05:20.440962 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/195dc125-dc26-4068-93b4-e5fca1c7d37d-scripts\") pod \"ovn-controller-ltr79-config-vjnxt\" (UID: \"195dc125-dc26-4068-93b4-e5fca1c7d37d\") " pod="openstack/ovn-controller-ltr79-config-vjnxt" Jan 21 16:05:20 crc kubenswrapper[4760]: I0121 16:05:20.441051 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/195dc125-dc26-4068-93b4-e5fca1c7d37d-var-run\") pod \"ovn-controller-ltr79-config-vjnxt\" (UID: \"195dc125-dc26-4068-93b4-e5fca1c7d37d\") " pod="openstack/ovn-controller-ltr79-config-vjnxt" Jan 21 16:05:20 crc kubenswrapper[4760]: I0121 16:05:20.441082 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpcsd\" (UniqueName: \"kubernetes.io/projected/195dc125-dc26-4068-93b4-e5fca1c7d37d-kube-api-access-vpcsd\") pod \"ovn-controller-ltr79-config-vjnxt\" (UID: \"195dc125-dc26-4068-93b4-e5fca1c7d37d\") " pod="openstack/ovn-controller-ltr79-config-vjnxt" Jan 21 16:05:20 crc kubenswrapper[4760]: I0121 16:05:20.441384 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" 
(UniqueName: \"kubernetes.io/host-path/195dc125-dc26-4068-93b4-e5fca1c7d37d-var-log-ovn\") pod \"ovn-controller-ltr79-config-vjnxt\" (UID: \"195dc125-dc26-4068-93b4-e5fca1c7d37d\") " pod="openstack/ovn-controller-ltr79-config-vjnxt" Jan 21 16:05:20 crc kubenswrapper[4760]: I0121 16:05:20.441546 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/195dc125-dc26-4068-93b4-e5fca1c7d37d-var-run-ovn\") pod \"ovn-controller-ltr79-config-vjnxt\" (UID: \"195dc125-dc26-4068-93b4-e5fca1c7d37d\") " pod="openstack/ovn-controller-ltr79-config-vjnxt" Jan 21 16:05:20 crc kubenswrapper[4760]: I0121 16:05:20.441708 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/195dc125-dc26-4068-93b4-e5fca1c7d37d-var-run\") pod \"ovn-controller-ltr79-config-vjnxt\" (UID: \"195dc125-dc26-4068-93b4-e5fca1c7d37d\") " pod="openstack/ovn-controller-ltr79-config-vjnxt" Jan 21 16:05:20 crc kubenswrapper[4760]: I0121 16:05:20.442275 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/195dc125-dc26-4068-93b4-e5fca1c7d37d-additional-scripts\") pod \"ovn-controller-ltr79-config-vjnxt\" (UID: \"195dc125-dc26-4068-93b4-e5fca1c7d37d\") " pod="openstack/ovn-controller-ltr79-config-vjnxt" Jan 21 16:05:20 crc kubenswrapper[4760]: I0121 16:05:20.443397 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/195dc125-dc26-4068-93b4-e5fca1c7d37d-scripts\") pod \"ovn-controller-ltr79-config-vjnxt\" (UID: \"195dc125-dc26-4068-93b4-e5fca1c7d37d\") " pod="openstack/ovn-controller-ltr79-config-vjnxt" Jan 21 16:05:20 crc kubenswrapper[4760]: I0121 16:05:20.479259 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpcsd\" (UniqueName: 
\"kubernetes.io/projected/195dc125-dc26-4068-93b4-e5fca1c7d37d-kube-api-access-vpcsd\") pod \"ovn-controller-ltr79-config-vjnxt\" (UID: \"195dc125-dc26-4068-93b4-e5fca1c7d37d\") " pod="openstack/ovn-controller-ltr79-config-vjnxt" Jan 21 16:05:20 crc kubenswrapper[4760]: I0121 16:05:20.665540 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ltr79-config-vjnxt" Jan 21 16:05:21 crc kubenswrapper[4760]: I0121 16:05:21.051232 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d1ccc2ed-d1e8-4b84-807d-55d70e8def12-etc-swift\") pod \"swift-storage-0\" (UID: \"d1ccc2ed-d1e8-4b84-807d-55d70e8def12\") " pod="openstack/swift-storage-0" Jan 21 16:05:21 crc kubenswrapper[4760]: E0121 16:05:21.051657 4760 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 21 16:05:21 crc kubenswrapper[4760]: E0121 16:05:21.051699 4760 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 21 16:05:21 crc kubenswrapper[4760]: E0121 16:05:21.051792 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d1ccc2ed-d1e8-4b84-807d-55d70e8def12-etc-swift podName:d1ccc2ed-d1e8-4b84-807d-55d70e8def12 nodeName:}" failed. No retries permitted until 2026-01-21 16:05:37.051759769 +0000 UTC m=+1107.719529357 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d1ccc2ed-d1e8-4b84-807d-55d70e8def12-etc-swift") pod "swift-storage-0" (UID: "d1ccc2ed-d1e8-4b84-807d-55d70e8def12") : configmap "swift-ring-files" not found Jan 21 16:05:21 crc kubenswrapper[4760]: I0121 16:05:21.245604 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ltr79" podUID="c17cd40e-6e7b-4c1e-9ca8-e6edc1248330" containerName="ovn-controller" probeResult="failure" output=< Jan 21 16:05:21 crc kubenswrapper[4760]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 21 16:05:21 crc kubenswrapper[4760]: > Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.585208 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b0d0-account-create-update-jg2cl" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.602865 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-4ghjk" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.611083 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-7dgzd" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.619582 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-nfdkf" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.648038 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-4b52-account-create-update-fnb8z" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.656489 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-faad-account-create-update-4btpg" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.692379 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-884n6" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.785453 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85bbee56-6cf4-4653-b69f-59b68063b3a1-operator-scripts\") pod \"85bbee56-6cf4-4653-b69f-59b68063b3a1\" (UID: \"85bbee56-6cf4-4653-b69f-59b68063b3a1\") " Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.785624 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/148dd39f-7ece-4735-ade5-103446b56147-operator-scripts\") pod \"148dd39f-7ece-4735-ade5-103446b56147\" (UID: \"148dd39f-7ece-4735-ade5-103446b56147\") " Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.785650 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdrgc\" (UniqueName: \"kubernetes.io/projected/85bbee56-6cf4-4653-b69f-59b68063b3a1-kube-api-access-kdrgc\") pod \"85bbee56-6cf4-4653-b69f-59b68063b3a1\" (UID: \"85bbee56-6cf4-4653-b69f-59b68063b3a1\") " Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.785675 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hplvf\" (UniqueName: \"kubernetes.io/projected/ddbef96c-1bfa-412a-a49d-460b6f6d90f9-kube-api-access-hplvf\") pod \"ddbef96c-1bfa-412a-a49d-460b6f6d90f9\" (UID: \"ddbef96c-1bfa-412a-a49d-460b6f6d90f9\") " Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.785704 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba247535-e91f-47de-a9c2-0ce8e91f8d23-operator-scripts\") pod \"ba247535-e91f-47de-a9c2-0ce8e91f8d23\" (UID: \"ba247535-e91f-47de-a9c2-0ce8e91f8d23\") " Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.785745 4760 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-gsr6d\" (UniqueName: \"kubernetes.io/projected/ba247535-e91f-47de-a9c2-0ce8e91f8d23-kube-api-access-gsr6d\") pod \"ba247535-e91f-47de-a9c2-0ce8e91f8d23\" (UID: \"ba247535-e91f-47de-a9c2-0ce8e91f8d23\") " Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.785778 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frnlh\" (UniqueName: \"kubernetes.io/projected/988b2688-7981-4093-a1d2-45796fb69f52-kube-api-access-frnlh\") pod \"988b2688-7981-4093-a1d2-45796fb69f52\" (UID: \"988b2688-7981-4093-a1d2-45796fb69f52\") " Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.785799 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c29c2669-d63b-4ac4-8680-fc14ced158f1-operator-scripts\") pod \"c29c2669-d63b-4ac4-8680-fc14ced158f1\" (UID: \"c29c2669-d63b-4ac4-8680-fc14ced158f1\") " Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.785825 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zm4z\" (UniqueName: \"kubernetes.io/projected/c29c2669-d63b-4ac4-8680-fc14ced158f1-kube-api-access-4zm4z\") pod \"c29c2669-d63b-4ac4-8680-fc14ced158f1\" (UID: \"c29c2669-d63b-4ac4-8680-fc14ced158f1\") " Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.785859 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ddbef96c-1bfa-412a-a49d-460b6f6d90f9-operator-scripts\") pod \"ddbef96c-1bfa-412a-a49d-460b6f6d90f9\" (UID: \"ddbef96c-1bfa-412a-a49d-460b6f6d90f9\") " Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.785935 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/988b2688-7981-4093-a1d2-45796fb69f52-operator-scripts\") pod 
\"988b2688-7981-4093-a1d2-45796fb69f52\" (UID: \"988b2688-7981-4093-a1d2-45796fb69f52\") " Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.785963 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z55sp\" (UniqueName: \"kubernetes.io/projected/148dd39f-7ece-4735-ade5-103446b56147-kube-api-access-z55sp\") pod \"148dd39f-7ece-4735-ade5-103446b56147\" (UID: \"148dd39f-7ece-4735-ade5-103446b56147\") " Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.786023 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85bbee56-6cf4-4653-b69f-59b68063b3a1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "85bbee56-6cf4-4653-b69f-59b68063b3a1" (UID: "85bbee56-6cf4-4653-b69f-59b68063b3a1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.786952 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85bbee56-6cf4-4653-b69f-59b68063b3a1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.790574 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c29c2669-d63b-4ac4-8680-fc14ced158f1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c29c2669-d63b-4ac4-8680-fc14ced158f1" (UID: "c29c2669-d63b-4ac4-8680-fc14ced158f1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.790747 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/148dd39f-7ece-4735-ade5-103446b56147-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "148dd39f-7ece-4735-ade5-103446b56147" (UID: "148dd39f-7ece-4735-ade5-103446b56147"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.790747 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/988b2688-7981-4093-a1d2-45796fb69f52-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "988b2688-7981-4093-a1d2-45796fb69f52" (UID: "988b2688-7981-4093-a1d2-45796fb69f52"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.791131 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ddbef96c-1bfa-412a-a49d-460b6f6d90f9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ddbef96c-1bfa-412a-a49d-460b6f6d90f9" (UID: "ddbef96c-1bfa-412a-a49d-460b6f6d90f9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.791167 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba247535-e91f-47de-a9c2-0ce8e91f8d23-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ba247535-e91f-47de-a9c2-0ce8e91f8d23" (UID: "ba247535-e91f-47de-a9c2-0ce8e91f8d23"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.794048 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c29c2669-d63b-4ac4-8680-fc14ced158f1-kube-api-access-4zm4z" (OuterVolumeSpecName: "kube-api-access-4zm4z") pod "c29c2669-d63b-4ac4-8680-fc14ced158f1" (UID: "c29c2669-d63b-4ac4-8680-fc14ced158f1"). InnerVolumeSpecName "kube-api-access-4zm4z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.794176 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/988b2688-7981-4093-a1d2-45796fb69f52-kube-api-access-frnlh" (OuterVolumeSpecName: "kube-api-access-frnlh") pod "988b2688-7981-4093-a1d2-45796fb69f52" (UID: "988b2688-7981-4093-a1d2-45796fb69f52"). InnerVolumeSpecName "kube-api-access-frnlh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.794671 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85bbee56-6cf4-4653-b69f-59b68063b3a1-kube-api-access-kdrgc" (OuterVolumeSpecName: "kube-api-access-kdrgc") pod "85bbee56-6cf4-4653-b69f-59b68063b3a1" (UID: "85bbee56-6cf4-4653-b69f-59b68063b3a1"). InnerVolumeSpecName "kube-api-access-kdrgc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.796829 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/148dd39f-7ece-4735-ade5-103446b56147-kube-api-access-z55sp" (OuterVolumeSpecName: "kube-api-access-z55sp") pod "148dd39f-7ece-4735-ade5-103446b56147" (UID: "148dd39f-7ece-4735-ade5-103446b56147"). InnerVolumeSpecName "kube-api-access-z55sp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.802717 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba247535-e91f-47de-a9c2-0ce8e91f8d23-kube-api-access-gsr6d" (OuterVolumeSpecName: "kube-api-access-gsr6d") pod "ba247535-e91f-47de-a9c2-0ce8e91f8d23" (UID: "ba247535-e91f-47de-a9c2-0ce8e91f8d23"). InnerVolumeSpecName "kube-api-access-gsr6d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.805074 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddbef96c-1bfa-412a-a49d-460b6f6d90f9-kube-api-access-hplvf" (OuterVolumeSpecName: "kube-api-access-hplvf") pod "ddbef96c-1bfa-412a-a49d-460b6f6d90f9" (UID: "ddbef96c-1bfa-412a-a49d-460b6f6d90f9"). InnerVolumeSpecName "kube-api-access-hplvf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.856707 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b0d0-account-create-update-jg2cl" event={"ID":"85bbee56-6cf4-4653-b69f-59b68063b3a1","Type":"ContainerDied","Data":"0b19a525fe07a8ec655096d816396d8ad482448d1de5bf7df56b6b4078e80af1"} Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.856777 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b19a525fe07a8ec655096d816396d8ad482448d1de5bf7df56b6b4078e80af1" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.856732 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b0d0-account-create-update-jg2cl" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.863090 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-884n6" event={"ID":"5f90ad69-2b58-48f1-a605-63486d38956f","Type":"ContainerDied","Data":"3e531d4f9c650fd6c16f60d1a5dc9df3fc673bb01d255423411a3c6bcb6f7c14"} Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.863139 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e531d4f9c650fd6c16f60d1a5dc9df3fc673bb01d255423411a3c6bcb6f7c14" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.863477 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-884n6" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.865576 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4b52-account-create-update-fnb8z" event={"ID":"c29c2669-d63b-4ac4-8680-fc14ced158f1","Type":"ContainerDied","Data":"db012bd0f041361a4d3a4fe4c5290ccdc01a62c7f9dfe1012638646f99a67f37"} Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.865618 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db012bd0f041361a4d3a4fe4c5290ccdc01a62c7f9dfe1012638646f99a67f37" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.865574 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-4b52-account-create-update-fnb8z" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.867529 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-nfdkf" event={"ID":"ba247535-e91f-47de-a9c2-0ce8e91f8d23","Type":"ContainerDied","Data":"b21ff1b9d58bbbecd4609101968febbc6809924558358397be858279389fd429"} Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.867556 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b21ff1b9d58bbbecd4609101968febbc6809924558358397be858279389fd429" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.867617 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-nfdkf" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.874272 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4ghjk" event={"ID":"148dd39f-7ece-4735-ade5-103446b56147","Type":"ContainerDied","Data":"376ea2217695f4951edd81ef92f0f246350c2850ab926b7fd4315df7b0299b4c"} Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.874305 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="376ea2217695f4951edd81ef92f0f246350c2850ab926b7fd4315df7b0299b4c" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.874311 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-4ghjk" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.876221 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-faad-account-create-update-4btpg" event={"ID":"988b2688-7981-4093-a1d2-45796fb69f52","Type":"ContainerDied","Data":"d9249a9682406f0ed27772f230535b22120d8aa6105ed99bfd541064c467fb18"} Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.876283 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9249a9682406f0ed27772f230535b22120d8aa6105ed99bfd541064c467fb18" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.876390 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-faad-account-create-update-4btpg" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.880159 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-7dgzd" event={"ID":"ddbef96c-1bfa-412a-a49d-460b6f6d90f9","Type":"ContainerDied","Data":"d24633c590c02b9baf90fa017a2b139524d4afbb7dc5bf736c5d2240c5b08aee"} Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.880185 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d24633c590c02b9baf90fa017a2b139524d4afbb7dc5bf736c5d2240c5b08aee" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.880302 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-7dgzd" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.892467 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f90ad69-2b58-48f1-a605-63486d38956f-operator-scripts\") pod \"5f90ad69-2b58-48f1-a605-63486d38956f\" (UID: \"5f90ad69-2b58-48f1-a605-63486d38956f\") " Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.892594 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxxvw\" (UniqueName: \"kubernetes.io/projected/5f90ad69-2b58-48f1-a605-63486d38956f-kube-api-access-xxxvw\") pod \"5f90ad69-2b58-48f1-a605-63486d38956f\" (UID: \"5f90ad69-2b58-48f1-a605-63486d38956f\") " Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.893163 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gsr6d\" (UniqueName: \"kubernetes.io/projected/ba247535-e91f-47de-a9c2-0ce8e91f8d23-kube-api-access-gsr6d\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.893180 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frnlh\" (UniqueName: 
\"kubernetes.io/projected/988b2688-7981-4093-a1d2-45796fb69f52-kube-api-access-frnlh\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.893193 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c29c2669-d63b-4ac4-8680-fc14ced158f1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.893212 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zm4z\" (UniqueName: \"kubernetes.io/projected/c29c2669-d63b-4ac4-8680-fc14ced158f1-kube-api-access-4zm4z\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.893226 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ddbef96c-1bfa-412a-a49d-460b6f6d90f9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.893238 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/988b2688-7981-4093-a1d2-45796fb69f52-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.893250 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z55sp\" (UniqueName: \"kubernetes.io/projected/148dd39f-7ece-4735-ade5-103446b56147-kube-api-access-z55sp\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.894723 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f90ad69-2b58-48f1-a605-63486d38956f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5f90ad69-2b58-48f1-a605-63486d38956f" (UID: "5f90ad69-2b58-48f1-a605-63486d38956f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.893261 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/148dd39f-7ece-4735-ade5-103446b56147-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.894868 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdrgc\" (UniqueName: \"kubernetes.io/projected/85bbee56-6cf4-4653-b69f-59b68063b3a1-kube-api-access-kdrgc\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.894885 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hplvf\" (UniqueName: \"kubernetes.io/projected/ddbef96c-1bfa-412a-a49d-460b6f6d90f9-kube-api-access-hplvf\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.894899 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba247535-e91f-47de-a9c2-0ce8e91f8d23-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.900044 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f90ad69-2b58-48f1-a605-63486d38956f-kube-api-access-xxxvw" (OuterVolumeSpecName: "kube-api-access-xxxvw") pod "5f90ad69-2b58-48f1-a605-63486d38956f" (UID: "5f90ad69-2b58-48f1-a605-63486d38956f"). InnerVolumeSpecName "kube-api-access-xxxvw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.924175 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ltr79-config-vjnxt"] Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.997038 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f90ad69-2b58-48f1-a605-63486d38956f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:22 crc kubenswrapper[4760]: I0121 16:05:22.997095 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxxvw\" (UniqueName: \"kubernetes.io/projected/5f90ad69-2b58-48f1-a605-63486d38956f-kube-api-access-xxxvw\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:23 crc kubenswrapper[4760]: I0121 16:05:23.183616 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-nf7wp"] Jan 21 16:05:23 crc kubenswrapper[4760]: W0121 16:05:23.211311 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4fdfaae_d8ad_46d6_b30a_1b671408ca51.slice/crio-3e783af74f6b9f49b3e2f4151440e443595611d84b2a228004ee9415a684f5ab WatchSource:0}: Error finding container 3e783af74f6b9f49b3e2f4151440e443595611d84b2a228004ee9415a684f5ab: Status 404 returned error can't find the container with id 3e783af74f6b9f49b3e2f4151440e443595611d84b2a228004ee9415a684f5ab Jan 21 16:05:23 crc kubenswrapper[4760]: I0121 16:05:23.894271 4760 generic.go:334] "Generic (PLEG): container finished" podID="195dc125-dc26-4068-93b4-e5fca1c7d37d" containerID="2d34bfbb1e9562044a28e1b8f99e51d17272859240cd9c059be93073a5a4cbd7" exitCode=0 Jan 21 16:05:23 crc kubenswrapper[4760]: I0121 16:05:23.895050 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ltr79-config-vjnxt" 
event={"ID":"195dc125-dc26-4068-93b4-e5fca1c7d37d","Type":"ContainerDied","Data":"2d34bfbb1e9562044a28e1b8f99e51d17272859240cd9c059be93073a5a4cbd7"} Jan 21 16:05:23 crc kubenswrapper[4760]: I0121 16:05:23.895189 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ltr79-config-vjnxt" event={"ID":"195dc125-dc26-4068-93b4-e5fca1c7d37d","Type":"ContainerStarted","Data":"af5666b1522a37dcec560ae403704fd2fd19ae33ebaada5640d6bdad9692091d"} Jan 21 16:05:23 crc kubenswrapper[4760]: I0121 16:05:23.898723 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-nf7wp" event={"ID":"c4fdfaae-d8ad-46d6-b30a-1b671408ca51","Type":"ContainerStarted","Data":"3e783af74f6b9f49b3e2f4151440e443595611d84b2a228004ee9415a684f5ab"} Jan 21 16:05:23 crc kubenswrapper[4760]: I0121 16:05:23.907560 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-28f8s" event={"ID":"5977817a-76bd-4df7-b942-4553334f046c","Type":"ContainerStarted","Data":"16560ea06e9421f0e5c8aafa10dc7a4873736db8d680f7d5984ad145fec2490a"} Jan 21 16:05:23 crc kubenswrapper[4760]: I0121 16:05:23.943836 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-28f8s" podStartSLOduration=3.940688559 podStartE2EDuration="11.943806785s" podCreationTimestamp="2026-01-21 16:05:12 +0000 UTC" firstStartedPulling="2026-01-21 16:05:15.540844847 +0000 UTC m=+1086.208614425" lastFinishedPulling="2026-01-21 16:05:23.543963073 +0000 UTC m=+1094.211732651" observedRunningTime="2026-01-21 16:05:23.932393762 +0000 UTC m=+1094.600163340" watchObservedRunningTime="2026-01-21 16:05:23.943806785 +0000 UTC m=+1094.611576363" Jan 21 16:05:24 crc kubenswrapper[4760]: I0121 16:05:24.917461 4760 generic.go:334] "Generic (PLEG): container finished" podID="c41049e0-0ea2-4944-a23b-739987c73dce" containerID="a44eceff90e62e30a52d0e1163f8404bfc13d78e3229cc13ac35fa7fd7798f1d" exitCode=0 Jan 21 16:05:24 crc 
kubenswrapper[4760]: I0121 16:05:24.917567 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-vscfw" event={"ID":"c41049e0-0ea2-4944-a23b-739987c73dce","Type":"ContainerDied","Data":"a44eceff90e62e30a52d0e1163f8404bfc13d78e3229cc13ac35fa7fd7798f1d"} Jan 21 16:05:25 crc kubenswrapper[4760]: I0121 16:05:25.313140 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ltr79-config-vjnxt" Jan 21 16:05:25 crc kubenswrapper[4760]: I0121 16:05:25.356391 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/195dc125-dc26-4068-93b4-e5fca1c7d37d-scripts\") pod \"195dc125-dc26-4068-93b4-e5fca1c7d37d\" (UID: \"195dc125-dc26-4068-93b4-e5fca1c7d37d\") " Jan 21 16:05:25 crc kubenswrapper[4760]: I0121 16:05:25.356668 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/195dc125-dc26-4068-93b4-e5fca1c7d37d-var-log-ovn\") pod \"195dc125-dc26-4068-93b4-e5fca1c7d37d\" (UID: \"195dc125-dc26-4068-93b4-e5fca1c7d37d\") " Jan 21 16:05:25 crc kubenswrapper[4760]: I0121 16:05:25.356701 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/195dc125-dc26-4068-93b4-e5fca1c7d37d-var-run-ovn\") pod \"195dc125-dc26-4068-93b4-e5fca1c7d37d\" (UID: \"195dc125-dc26-4068-93b4-e5fca1c7d37d\") " Jan 21 16:05:25 crc kubenswrapper[4760]: I0121 16:05:25.356770 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/195dc125-dc26-4068-93b4-e5fca1c7d37d-additional-scripts\") pod \"195dc125-dc26-4068-93b4-e5fca1c7d37d\" (UID: \"195dc125-dc26-4068-93b4-e5fca1c7d37d\") " Jan 21 16:05:25 crc kubenswrapper[4760]: I0121 16:05:25.356786 4760 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/195dc125-dc26-4068-93b4-e5fca1c7d37d-var-run\") pod \"195dc125-dc26-4068-93b4-e5fca1c7d37d\" (UID: \"195dc125-dc26-4068-93b4-e5fca1c7d37d\") " Jan 21 16:05:25 crc kubenswrapper[4760]: I0121 16:05:25.357184 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/195dc125-dc26-4068-93b4-e5fca1c7d37d-var-run" (OuterVolumeSpecName: "var-run") pod "195dc125-dc26-4068-93b4-e5fca1c7d37d" (UID: "195dc125-dc26-4068-93b4-e5fca1c7d37d"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 16:05:25 crc kubenswrapper[4760]: I0121 16:05:25.357243 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/195dc125-dc26-4068-93b4-e5fca1c7d37d-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "195dc125-dc26-4068-93b4-e5fca1c7d37d" (UID: "195dc125-dc26-4068-93b4-e5fca1c7d37d"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 16:05:25 crc kubenswrapper[4760]: I0121 16:05:25.357259 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/195dc125-dc26-4068-93b4-e5fca1c7d37d-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "195dc125-dc26-4068-93b4-e5fca1c7d37d" (UID: "195dc125-dc26-4068-93b4-e5fca1c7d37d"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 16:05:25 crc kubenswrapper[4760]: I0121 16:05:25.363043 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/195dc125-dc26-4068-93b4-e5fca1c7d37d-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "195dc125-dc26-4068-93b4-e5fca1c7d37d" (UID: "195dc125-dc26-4068-93b4-e5fca1c7d37d"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:05:25 crc kubenswrapper[4760]: I0121 16:05:25.364257 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/195dc125-dc26-4068-93b4-e5fca1c7d37d-scripts" (OuterVolumeSpecName: "scripts") pod "195dc125-dc26-4068-93b4-e5fca1c7d37d" (UID: "195dc125-dc26-4068-93b4-e5fca1c7d37d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:05:25 crc kubenswrapper[4760]: I0121 16:05:25.458254 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpcsd\" (UniqueName: \"kubernetes.io/projected/195dc125-dc26-4068-93b4-e5fca1c7d37d-kube-api-access-vpcsd\") pod \"195dc125-dc26-4068-93b4-e5fca1c7d37d\" (UID: \"195dc125-dc26-4068-93b4-e5fca1c7d37d\") " Jan 21 16:05:25 crc kubenswrapper[4760]: I0121 16:05:25.458600 4760 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/195dc125-dc26-4068-93b4-e5fca1c7d37d-var-run\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:25 crc kubenswrapper[4760]: I0121 16:05:25.458620 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/195dc125-dc26-4068-93b4-e5fca1c7d37d-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:25 crc kubenswrapper[4760]: I0121 16:05:25.458631 4760 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/195dc125-dc26-4068-93b4-e5fca1c7d37d-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:25 crc kubenswrapper[4760]: I0121 16:05:25.458642 4760 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/195dc125-dc26-4068-93b4-e5fca1c7d37d-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:25 crc kubenswrapper[4760]: I0121 16:05:25.458651 4760 reconciler_common.go:293] "Volume detached for volume 
\"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/195dc125-dc26-4068-93b4-e5fca1c7d37d-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:25 crc kubenswrapper[4760]: I0121 16:05:25.480524 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/195dc125-dc26-4068-93b4-e5fca1c7d37d-kube-api-access-vpcsd" (OuterVolumeSpecName: "kube-api-access-vpcsd") pod "195dc125-dc26-4068-93b4-e5fca1c7d37d" (UID: "195dc125-dc26-4068-93b4-e5fca1c7d37d"). InnerVolumeSpecName "kube-api-access-vpcsd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:05:25 crc kubenswrapper[4760]: I0121 16:05:25.615262 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-4ghjk"] Jan 21 16:05:25 crc kubenswrapper[4760]: I0121 16:05:25.623240 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-4ghjk"] Jan 21 16:05:25 crc kubenswrapper[4760]: I0121 16:05:25.661847 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpcsd\" (UniqueName: \"kubernetes.io/projected/195dc125-dc26-4068-93b4-e5fca1c7d37d-kube-api-access-vpcsd\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:25 crc kubenswrapper[4760]: I0121 16:05:25.675657 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="148dd39f-7ece-4735-ade5-103446b56147" path="/var/lib/kubelet/pods/148dd39f-7ece-4735-ade5-103446b56147/volumes" Jan 21 16:05:25 crc kubenswrapper[4760]: I0121 16:05:25.997067 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ltr79-config-vjnxt" event={"ID":"195dc125-dc26-4068-93b4-e5fca1c7d37d","Type":"ContainerDied","Data":"af5666b1522a37dcec560ae403704fd2fd19ae33ebaada5640d6bdad9692091d"} Jan 21 16:05:25 crc kubenswrapper[4760]: I0121 16:05:25.997116 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ltr79-config-vjnxt" Jan 21 16:05:25 crc kubenswrapper[4760]: I0121 16:05:25.997119 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af5666b1522a37dcec560ae403704fd2fd19ae33ebaada5640d6bdad9692091d" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.152475 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.269256 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ltr79" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.414271 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-vscfw" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.454396 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ltr79-config-vjnxt"] Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.475185 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ltr79-config-vjnxt"] Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.508391 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c41049e0-0ea2-4944-a23b-739987c73dce-combined-ca-bundle\") pod \"c41049e0-0ea2-4944-a23b-739987c73dce\" (UID: \"c41049e0-0ea2-4944-a23b-739987c73dce\") " Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.508752 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c41049e0-0ea2-4944-a23b-739987c73dce-etc-swift\") pod \"c41049e0-0ea2-4944-a23b-739987c73dce\" (UID: \"c41049e0-0ea2-4944-a23b-739987c73dce\") " Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.508894 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c41049e0-0ea2-4944-a23b-739987c73dce-dispersionconf\") pod \"c41049e0-0ea2-4944-a23b-739987c73dce\" (UID: \"c41049e0-0ea2-4944-a23b-739987c73dce\") " Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.509037 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c41049e0-0ea2-4944-a23b-739987c73dce-scripts\") pod \"c41049e0-0ea2-4944-a23b-739987c73dce\" (UID: \"c41049e0-0ea2-4944-a23b-739987c73dce\") " Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.509250 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c41049e0-0ea2-4944-a23b-739987c73dce-ring-data-devices\") pod \"c41049e0-0ea2-4944-a23b-739987c73dce\" (UID: \"c41049e0-0ea2-4944-a23b-739987c73dce\") " Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.509403 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjcc6\" (UniqueName: \"kubernetes.io/projected/c41049e0-0ea2-4944-a23b-739987c73dce-kube-api-access-jjcc6\") pod \"c41049e0-0ea2-4944-a23b-739987c73dce\" (UID: \"c41049e0-0ea2-4944-a23b-739987c73dce\") " Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.509532 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c41049e0-0ea2-4944-a23b-739987c73dce-swiftconf\") pod \"c41049e0-0ea2-4944-a23b-739987c73dce\" (UID: \"c41049e0-0ea2-4944-a23b-739987c73dce\") " Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.510020 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c41049e0-0ea2-4944-a23b-739987c73dce-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "c41049e0-0ea2-4944-a23b-739987c73dce" (UID: "c41049e0-0ea2-4944-a23b-739987c73dce"). 
InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.510954 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c41049e0-0ea2-4944-a23b-739987c73dce-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "c41049e0-0ea2-4944-a23b-739987c73dce" (UID: "c41049e0-0ea2-4944-a23b-739987c73dce"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.518407 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c41049e0-0ea2-4944-a23b-739987c73dce-kube-api-access-jjcc6" (OuterVolumeSpecName: "kube-api-access-jjcc6") pod "c41049e0-0ea2-4944-a23b-739987c73dce" (UID: "c41049e0-0ea2-4944-a23b-739987c73dce"). InnerVolumeSpecName "kube-api-access-jjcc6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.521848 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c41049e0-0ea2-4944-a23b-739987c73dce-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "c41049e0-0ea2-4944-a23b-739987c73dce" (UID: "c41049e0-0ea2-4944-a23b-739987c73dce"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.551489 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c41049e0-0ea2-4944-a23b-739987c73dce-scripts" (OuterVolumeSpecName: "scripts") pod "c41049e0-0ea2-4944-a23b-739987c73dce" (UID: "c41049e0-0ea2-4944-a23b-739987c73dce"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.559874 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c41049e0-0ea2-4944-a23b-739987c73dce-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c41049e0-0ea2-4944-a23b-739987c73dce" (UID: "c41049e0-0ea2-4944-a23b-739987c73dce"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.571995 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c41049e0-0ea2-4944-a23b-739987c73dce-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "c41049e0-0ea2-4944-a23b-739987c73dce" (UID: "c41049e0-0ea2-4944-a23b-739987c73dce"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.644418 4760 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c41049e0-0ea2-4944-a23b-739987c73dce-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.644459 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c41049e0-0ea2-4944-a23b-739987c73dce-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.644468 4760 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c41049e0-0ea2-4944-a23b-739987c73dce-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.644479 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjcc6\" (UniqueName: \"kubernetes.io/projected/c41049e0-0ea2-4944-a23b-739987c73dce-kube-api-access-jjcc6\") on node \"crc\" DevicePath \"\"" 
Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.644489 4760 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c41049e0-0ea2-4944-a23b-739987c73dce-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.644499 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c41049e0-0ea2-4944-a23b-739987c73dce-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.644509 4760 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c41049e0-0ea2-4944-a23b-739987c73dce-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.700898 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ltr79-config-zwhhs"] Jan 21 16:05:26 crc kubenswrapper[4760]: E0121 16:05:26.701244 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f90ad69-2b58-48f1-a605-63486d38956f" containerName="mariadb-database-create" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.701262 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f90ad69-2b58-48f1-a605-63486d38956f" containerName="mariadb-database-create" Jan 21 16:05:26 crc kubenswrapper[4760]: E0121 16:05:26.701273 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c41049e0-0ea2-4944-a23b-739987c73dce" containerName="swift-ring-rebalance" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.701280 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="c41049e0-0ea2-4944-a23b-739987c73dce" containerName="swift-ring-rebalance" Jan 21 16:05:26 crc kubenswrapper[4760]: E0121 16:05:26.701292 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="988b2688-7981-4093-a1d2-45796fb69f52" containerName="mariadb-account-create-update" Jan 21 
16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.701299 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="988b2688-7981-4093-a1d2-45796fb69f52" containerName="mariadb-account-create-update" Jan 21 16:05:26 crc kubenswrapper[4760]: E0121 16:05:26.701311 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="195dc125-dc26-4068-93b4-e5fca1c7d37d" containerName="ovn-config" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.701317 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="195dc125-dc26-4068-93b4-e5fca1c7d37d" containerName="ovn-config" Jan 21 16:05:26 crc kubenswrapper[4760]: E0121 16:05:26.701412 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="148dd39f-7ece-4735-ade5-103446b56147" containerName="mariadb-account-create-update" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.701419 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="148dd39f-7ece-4735-ade5-103446b56147" containerName="mariadb-account-create-update" Jan 21 16:05:26 crc kubenswrapper[4760]: E0121 16:05:26.701435 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85bbee56-6cf4-4653-b69f-59b68063b3a1" containerName="mariadb-account-create-update" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.701442 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="85bbee56-6cf4-4653-b69f-59b68063b3a1" containerName="mariadb-account-create-update" Jan 21 16:05:26 crc kubenswrapper[4760]: E0121 16:05:26.701455 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c29c2669-d63b-4ac4-8680-fc14ced158f1" containerName="mariadb-account-create-update" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.701461 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="c29c2669-d63b-4ac4-8680-fc14ced158f1" containerName="mariadb-account-create-update" Jan 21 16:05:26 crc kubenswrapper[4760]: E0121 16:05:26.701471 4760 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ba247535-e91f-47de-a9c2-0ce8e91f8d23" containerName="mariadb-database-create" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.701477 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba247535-e91f-47de-a9c2-0ce8e91f8d23" containerName="mariadb-database-create" Jan 21 16:05:26 crc kubenswrapper[4760]: E0121 16:05:26.701491 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddbef96c-1bfa-412a-a49d-460b6f6d90f9" containerName="mariadb-database-create" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.701499 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddbef96c-1bfa-412a-a49d-460b6f6d90f9" containerName="mariadb-database-create" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.701652 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f90ad69-2b58-48f1-a605-63486d38956f" containerName="mariadb-database-create" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.701665 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="c41049e0-0ea2-4944-a23b-739987c73dce" containerName="swift-ring-rebalance" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.701673 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="148dd39f-7ece-4735-ade5-103446b56147" containerName="mariadb-account-create-update" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.701682 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="195dc125-dc26-4068-93b4-e5fca1c7d37d" containerName="ovn-config" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.701691 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="85bbee56-6cf4-4653-b69f-59b68063b3a1" containerName="mariadb-account-create-update" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.701706 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba247535-e91f-47de-a9c2-0ce8e91f8d23" containerName="mariadb-database-create" Jan 21 16:05:26 crc 
kubenswrapper[4760]: I0121 16:05:26.701717 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="c29c2669-d63b-4ac4-8680-fc14ced158f1" containerName="mariadb-account-create-update" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.701727 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="988b2688-7981-4093-a1d2-45796fb69f52" containerName="mariadb-account-create-update" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.701735 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddbef96c-1bfa-412a-a49d-460b6f6d90f9" containerName="mariadb-database-create" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.702305 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ltr79-config-zwhhs" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.705215 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.708704 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ltr79-config-zwhhs"] Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.850520 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-var-log-ovn\") pod \"ovn-controller-ltr79-config-zwhhs\" (UID: \"9d49e47d-fcb6-40d9-9312-8d4e16d28ccc\") " pod="openstack/ovn-controller-ltr79-config-zwhhs" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.850808 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2v9r4\" (UniqueName: \"kubernetes.io/projected/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-kube-api-access-2v9r4\") pod \"ovn-controller-ltr79-config-zwhhs\" (UID: \"9d49e47d-fcb6-40d9-9312-8d4e16d28ccc\") " 
pod="openstack/ovn-controller-ltr79-config-zwhhs" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.850867 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-scripts\") pod \"ovn-controller-ltr79-config-zwhhs\" (UID: \"9d49e47d-fcb6-40d9-9312-8d4e16d28ccc\") " pod="openstack/ovn-controller-ltr79-config-zwhhs" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.851015 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-var-run-ovn\") pod \"ovn-controller-ltr79-config-zwhhs\" (UID: \"9d49e47d-fcb6-40d9-9312-8d4e16d28ccc\") " pod="openstack/ovn-controller-ltr79-config-zwhhs" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.851141 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-additional-scripts\") pod \"ovn-controller-ltr79-config-zwhhs\" (UID: \"9d49e47d-fcb6-40d9-9312-8d4e16d28ccc\") " pod="openstack/ovn-controller-ltr79-config-zwhhs" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.851203 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-var-run\") pod \"ovn-controller-ltr79-config-zwhhs\" (UID: \"9d49e47d-fcb6-40d9-9312-8d4e16d28ccc\") " pod="openstack/ovn-controller-ltr79-config-zwhhs" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.952391 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-scripts\") pod \"ovn-controller-ltr79-config-zwhhs\" (UID: 
\"9d49e47d-fcb6-40d9-9312-8d4e16d28ccc\") " pod="openstack/ovn-controller-ltr79-config-zwhhs" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.952456 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-var-run-ovn\") pod \"ovn-controller-ltr79-config-zwhhs\" (UID: \"9d49e47d-fcb6-40d9-9312-8d4e16d28ccc\") " pod="openstack/ovn-controller-ltr79-config-zwhhs" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.952498 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-additional-scripts\") pod \"ovn-controller-ltr79-config-zwhhs\" (UID: \"9d49e47d-fcb6-40d9-9312-8d4e16d28ccc\") " pod="openstack/ovn-controller-ltr79-config-zwhhs" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.952522 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-var-run\") pod \"ovn-controller-ltr79-config-zwhhs\" (UID: \"9d49e47d-fcb6-40d9-9312-8d4e16d28ccc\") " pod="openstack/ovn-controller-ltr79-config-zwhhs" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.952561 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-var-log-ovn\") pod \"ovn-controller-ltr79-config-zwhhs\" (UID: \"9d49e47d-fcb6-40d9-9312-8d4e16d28ccc\") " pod="openstack/ovn-controller-ltr79-config-zwhhs" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.952592 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2v9r4\" (UniqueName: \"kubernetes.io/projected/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-kube-api-access-2v9r4\") pod \"ovn-controller-ltr79-config-zwhhs\" (UID: 
\"9d49e47d-fcb6-40d9-9312-8d4e16d28ccc\") " pod="openstack/ovn-controller-ltr79-config-zwhhs" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.953197 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-var-run\") pod \"ovn-controller-ltr79-config-zwhhs\" (UID: \"9d49e47d-fcb6-40d9-9312-8d4e16d28ccc\") " pod="openstack/ovn-controller-ltr79-config-zwhhs" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.953250 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-var-log-ovn\") pod \"ovn-controller-ltr79-config-zwhhs\" (UID: \"9d49e47d-fcb6-40d9-9312-8d4e16d28ccc\") " pod="openstack/ovn-controller-ltr79-config-zwhhs" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.953255 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-var-run-ovn\") pod \"ovn-controller-ltr79-config-zwhhs\" (UID: \"9d49e47d-fcb6-40d9-9312-8d4e16d28ccc\") " pod="openstack/ovn-controller-ltr79-config-zwhhs" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.954009 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-additional-scripts\") pod \"ovn-controller-ltr79-config-zwhhs\" (UID: \"9d49e47d-fcb6-40d9-9312-8d4e16d28ccc\") " pod="openstack/ovn-controller-ltr79-config-zwhhs" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.955205 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-scripts\") pod \"ovn-controller-ltr79-config-zwhhs\" (UID: \"9d49e47d-fcb6-40d9-9312-8d4e16d28ccc\") " 
pod="openstack/ovn-controller-ltr79-config-zwhhs" Jan 21 16:05:26 crc kubenswrapper[4760]: I0121 16:05:26.973969 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2v9r4\" (UniqueName: \"kubernetes.io/projected/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-kube-api-access-2v9r4\") pod \"ovn-controller-ltr79-config-zwhhs\" (UID: \"9d49e47d-fcb6-40d9-9312-8d4e16d28ccc\") " pod="openstack/ovn-controller-ltr79-config-zwhhs" Jan 21 16:05:27 crc kubenswrapper[4760]: I0121 16:05:27.009292 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-vscfw" event={"ID":"c41049e0-0ea2-4944-a23b-739987c73dce","Type":"ContainerDied","Data":"352d476e436c549bbc8e9f8fb65684cf9e4c430f501083a7cf6695565c71a105"} Jan 21 16:05:27 crc kubenswrapper[4760]: I0121 16:05:27.009356 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="352d476e436c549bbc8e9f8fb65684cf9e4c430f501083a7cf6695565c71a105" Jan 21 16:05:27 crc kubenswrapper[4760]: I0121 16:05:27.009398 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-vscfw" Jan 21 16:05:27 crc kubenswrapper[4760]: I0121 16:05:27.027689 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ltr79-config-zwhhs" Jan 21 16:05:27 crc kubenswrapper[4760]: I0121 16:05:27.370732 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ltr79-config-zwhhs"] Jan 21 16:05:27 crc kubenswrapper[4760]: I0121 16:05:27.633161 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="195dc125-dc26-4068-93b4-e5fca1c7d37d" path="/var/lib/kubelet/pods/195dc125-dc26-4068-93b4-e5fca1c7d37d/volumes" Jan 21 16:05:28 crc kubenswrapper[4760]: I0121 16:05:28.019163 4760 generic.go:334] "Generic (PLEG): container finished" podID="5977817a-76bd-4df7-b942-4553334f046c" containerID="16560ea06e9421f0e5c8aafa10dc7a4873736db8d680f7d5984ad145fec2490a" exitCode=0 Jan 21 16:05:28 crc kubenswrapper[4760]: I0121 16:05:28.019259 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-28f8s" event={"ID":"5977817a-76bd-4df7-b942-4553334f046c","Type":"ContainerDied","Data":"16560ea06e9421f0e5c8aafa10dc7a4873736db8d680f7d5984ad145fec2490a"} Jan 21 16:05:28 crc kubenswrapper[4760]: I0121 16:05:28.022166 4760 generic.go:334] "Generic (PLEG): container finished" podID="9d49e47d-fcb6-40d9-9312-8d4e16d28ccc" containerID="34038f4c7fac9f938c55ed43e5c32a1fe9257ccbfb52b4dbf532309cae01868b" exitCode=0 Jan 21 16:05:28 crc kubenswrapper[4760]: I0121 16:05:28.022208 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ltr79-config-zwhhs" event={"ID":"9d49e47d-fcb6-40d9-9312-8d4e16d28ccc","Type":"ContainerDied","Data":"34038f4c7fac9f938c55ed43e5c32a1fe9257ccbfb52b4dbf532309cae01868b"} Jan 21 16:05:28 crc kubenswrapper[4760]: I0121 16:05:28.022237 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ltr79-config-zwhhs" event={"ID":"9d49e47d-fcb6-40d9-9312-8d4e16d28ccc","Type":"ContainerStarted","Data":"ce61e281854dbfe484805a387d2b3b623111bc19e89185d93a910c47d943bb25"} Jan 21 16:05:30 crc kubenswrapper[4760]: I0121 
16:05:30.636585 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-tjb9t"] Jan 21 16:05:30 crc kubenswrapper[4760]: I0121 16:05:30.638764 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-tjb9t" Jan 21 16:05:30 crc kubenswrapper[4760]: I0121 16:05:30.642663 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 21 16:05:30 crc kubenswrapper[4760]: I0121 16:05:30.647531 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-tjb9t"] Jan 21 16:05:30 crc kubenswrapper[4760]: I0121 16:05:30.738363 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwzwf\" (UniqueName: \"kubernetes.io/projected/6d4e60fd-bb4c-4460-87db-729dac85afbc-kube-api-access-hwzwf\") pod \"root-account-create-update-tjb9t\" (UID: \"6d4e60fd-bb4c-4460-87db-729dac85afbc\") " pod="openstack/root-account-create-update-tjb9t" Jan 21 16:05:30 crc kubenswrapper[4760]: I0121 16:05:30.738428 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d4e60fd-bb4c-4460-87db-729dac85afbc-operator-scripts\") pod \"root-account-create-update-tjb9t\" (UID: \"6d4e60fd-bb4c-4460-87db-729dac85afbc\") " pod="openstack/root-account-create-update-tjb9t" Jan 21 16:05:30 crc kubenswrapper[4760]: I0121 16:05:30.840209 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwzwf\" (UniqueName: \"kubernetes.io/projected/6d4e60fd-bb4c-4460-87db-729dac85afbc-kube-api-access-hwzwf\") pod \"root-account-create-update-tjb9t\" (UID: \"6d4e60fd-bb4c-4460-87db-729dac85afbc\") " pod="openstack/root-account-create-update-tjb9t" Jan 21 16:05:30 crc kubenswrapper[4760]: I0121 16:05:30.840292 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d4e60fd-bb4c-4460-87db-729dac85afbc-operator-scripts\") pod \"root-account-create-update-tjb9t\" (UID: \"6d4e60fd-bb4c-4460-87db-729dac85afbc\") " pod="openstack/root-account-create-update-tjb9t" Jan 21 16:05:30 crc kubenswrapper[4760]: I0121 16:05:30.841143 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d4e60fd-bb4c-4460-87db-729dac85afbc-operator-scripts\") pod \"root-account-create-update-tjb9t\" (UID: \"6d4e60fd-bb4c-4460-87db-729dac85afbc\") " pod="openstack/root-account-create-update-tjb9t" Jan 21 16:05:30 crc kubenswrapper[4760]: I0121 16:05:30.864362 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwzwf\" (UniqueName: \"kubernetes.io/projected/6d4e60fd-bb4c-4460-87db-729dac85afbc-kube-api-access-hwzwf\") pod \"root-account-create-update-tjb9t\" (UID: \"6d4e60fd-bb4c-4460-87db-729dac85afbc\") " pod="openstack/root-account-create-update-tjb9t" Jan 21 16:05:30 crc kubenswrapper[4760]: I0121 16:05:30.966410 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-tjb9t" Jan 21 16:05:37 crc kubenswrapper[4760]: I0121 16:05:37.149772 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d1ccc2ed-d1e8-4b84-807d-55d70e8def12-etc-swift\") pod \"swift-storage-0\" (UID: \"d1ccc2ed-d1e8-4b84-807d-55d70e8def12\") " pod="openstack/swift-storage-0" Jan 21 16:05:37 crc kubenswrapper[4760]: I0121 16:05:37.161499 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d1ccc2ed-d1e8-4b84-807d-55d70e8def12-etc-swift\") pod \"swift-storage-0\" (UID: \"d1ccc2ed-d1e8-4b84-807d-55d70e8def12\") " pod="openstack/swift-storage-0" Jan 21 16:05:37 crc kubenswrapper[4760]: I0121 16:05:37.454978 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 21 16:05:50 crc kubenswrapper[4760]: E0121 16:05:50.217420 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Jan 21 16:05:50 crc kubenswrapper[4760]: E0121 16:05:50.218030 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zdqk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-nf7wp_openstack(c4fdfaae-d8ad-46d6-b30a-1b671408ca51): ErrImagePull: rpc error: code = Canceled desc = 
copying config: context canceled" logger="UnhandledError" Jan 21 16:05:50 crc kubenswrapper[4760]: E0121 16:05:50.219755 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-nf7wp" podUID="c4fdfaae-d8ad-46d6-b30a-1b671408ca51" Jan 21 16:05:50 crc kubenswrapper[4760]: I0121 16:05:50.251376 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-28f8s" event={"ID":"5977817a-76bd-4df7-b942-4553334f046c","Type":"ContainerDied","Data":"eaad9eaea810cc06481da534223ae7e3a6e10e8dfdb292752003e6072fd72b4a"} Jan 21 16:05:50 crc kubenswrapper[4760]: I0121 16:05:50.251431 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eaad9eaea810cc06481da534223ae7e3a6e10e8dfdb292752003e6072fd72b4a" Jan 21 16:05:50 crc kubenswrapper[4760]: I0121 16:05:50.262268 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ltr79-config-zwhhs" event={"ID":"9d49e47d-fcb6-40d9-9312-8d4e16d28ccc","Type":"ContainerDied","Data":"ce61e281854dbfe484805a387d2b3b623111bc19e89185d93a910c47d943bb25"} Jan 21 16:05:50 crc kubenswrapper[4760]: I0121 16:05:50.262359 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce61e281854dbfe484805a387d2b3b623111bc19e89185d93a910c47d943bb25" Jan 21 16:05:50 crc kubenswrapper[4760]: E0121 16:05:50.264235 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-nf7wp" podUID="c4fdfaae-d8ad-46d6-b30a-1b671408ca51" Jan 21 16:05:50 crc kubenswrapper[4760]: I0121 16:05:50.805076 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ltr79-config-zwhhs" Jan 21 16:05:50 crc kubenswrapper[4760]: I0121 16:05:50.806066 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-28f8s" Jan 21 16:05:51 crc kubenswrapper[4760]: I0121 16:05:51.038666 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-var-run\") pod \"9d49e47d-fcb6-40d9-9312-8d4e16d28ccc\" (UID: \"9d49e47d-fcb6-40d9-9312-8d4e16d28ccc\") " Jan 21 16:05:51 crc kubenswrapper[4760]: I0121 16:05:51.038737 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-var-log-ovn\") pod \"9d49e47d-fcb6-40d9-9312-8d4e16d28ccc\" (UID: \"9d49e47d-fcb6-40d9-9312-8d4e16d28ccc\") " Jan 21 16:05:51 crc kubenswrapper[4760]: I0121 16:05:51.038818 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6877\" (UniqueName: \"kubernetes.io/projected/5977817a-76bd-4df7-b942-4553334f046c-kube-api-access-k6877\") pod \"5977817a-76bd-4df7-b942-4553334f046c\" (UID: \"5977817a-76bd-4df7-b942-4553334f046c\") " Jan 21 16:05:51 crc kubenswrapper[4760]: I0121 16:05:51.038873 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5977817a-76bd-4df7-b942-4553334f046c-config-data\") pod \"5977817a-76bd-4df7-b942-4553334f046c\" (UID: \"5977817a-76bd-4df7-b942-4553334f046c\") " Jan 21 16:05:51 crc kubenswrapper[4760]: I0121 16:05:51.038904 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2v9r4\" (UniqueName: \"kubernetes.io/projected/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-kube-api-access-2v9r4\") pod \"9d49e47d-fcb6-40d9-9312-8d4e16d28ccc\" (UID: 
\"9d49e47d-fcb6-40d9-9312-8d4e16d28ccc\") " Jan 21 16:05:51 crc kubenswrapper[4760]: I0121 16:05:51.038932 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-var-run-ovn\") pod \"9d49e47d-fcb6-40d9-9312-8d4e16d28ccc\" (UID: \"9d49e47d-fcb6-40d9-9312-8d4e16d28ccc\") " Jan 21 16:05:51 crc kubenswrapper[4760]: I0121 16:05:51.038946 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "9d49e47d-fcb6-40d9-9312-8d4e16d28ccc" (UID: "9d49e47d-fcb6-40d9-9312-8d4e16d28ccc"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 16:05:51 crc kubenswrapper[4760]: I0121 16:05:51.038994 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-var-run" (OuterVolumeSpecName: "var-run") pod "9d49e47d-fcb6-40d9-9312-8d4e16d28ccc" (UID: "9d49e47d-fcb6-40d9-9312-8d4e16d28ccc"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 16:05:51 crc kubenswrapper[4760]: I0121 16:05:51.039043 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-scripts\") pod \"9d49e47d-fcb6-40d9-9312-8d4e16d28ccc\" (UID: \"9d49e47d-fcb6-40d9-9312-8d4e16d28ccc\") " Jan 21 16:05:51 crc kubenswrapper[4760]: I0121 16:05:51.039073 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-additional-scripts\") pod \"9d49e47d-fcb6-40d9-9312-8d4e16d28ccc\" (UID: \"9d49e47d-fcb6-40d9-9312-8d4e16d28ccc\") " Jan 21 16:05:51 crc kubenswrapper[4760]: I0121 16:05:51.039105 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5977817a-76bd-4df7-b942-4553334f046c-combined-ca-bundle\") pod \"5977817a-76bd-4df7-b942-4553334f046c\" (UID: \"5977817a-76bd-4df7-b942-4553334f046c\") " Jan 21 16:05:51 crc kubenswrapper[4760]: I0121 16:05:51.039351 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "9d49e47d-fcb6-40d9-9312-8d4e16d28ccc" (UID: "9d49e47d-fcb6-40d9-9312-8d4e16d28ccc"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 16:05:51 crc kubenswrapper[4760]: I0121 16:05:51.040728 4760 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-var-run\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:51 crc kubenswrapper[4760]: I0121 16:05:51.040757 4760 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:51 crc kubenswrapper[4760]: I0121 16:05:51.040769 4760 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:51 crc kubenswrapper[4760]: I0121 16:05:51.044482 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "9d49e47d-fcb6-40d9-9312-8d4e16d28ccc" (UID: "9d49e47d-fcb6-40d9-9312-8d4e16d28ccc"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:05:51 crc kubenswrapper[4760]: I0121 16:05:51.044648 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-scripts" (OuterVolumeSpecName: "scripts") pod "9d49e47d-fcb6-40d9-9312-8d4e16d28ccc" (UID: "9d49e47d-fcb6-40d9-9312-8d4e16d28ccc"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:05:51 crc kubenswrapper[4760]: I0121 16:05:51.075360 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5977817a-76bd-4df7-b942-4553334f046c-kube-api-access-k6877" (OuterVolumeSpecName: "kube-api-access-k6877") pod "5977817a-76bd-4df7-b942-4553334f046c" (UID: "5977817a-76bd-4df7-b942-4553334f046c"). InnerVolumeSpecName "kube-api-access-k6877". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:05:51 crc kubenswrapper[4760]: I0121 16:05:51.082861 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-kube-api-access-2v9r4" (OuterVolumeSpecName: "kube-api-access-2v9r4") pod "9d49e47d-fcb6-40d9-9312-8d4e16d28ccc" (UID: "9d49e47d-fcb6-40d9-9312-8d4e16d28ccc"). InnerVolumeSpecName "kube-api-access-2v9r4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:05:51 crc kubenswrapper[4760]: I0121 16:05:51.143411 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:51 crc kubenswrapper[4760]: I0121 16:05:51.143446 4760 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:51 crc kubenswrapper[4760]: I0121 16:05:51.143458 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6877\" (UniqueName: \"kubernetes.io/projected/5977817a-76bd-4df7-b942-4553334f046c-kube-api-access-k6877\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:51 crc kubenswrapper[4760]: I0121 16:05:51.143468 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2v9r4\" (UniqueName: 
\"kubernetes.io/projected/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc-kube-api-access-2v9r4\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:51 crc kubenswrapper[4760]: I0121 16:05:51.162663 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5977817a-76bd-4df7-b942-4553334f046c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5977817a-76bd-4df7-b942-4553334f046c" (UID: "5977817a-76bd-4df7-b942-4553334f046c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:05:51 crc kubenswrapper[4760]: I0121 16:05:51.215593 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5977817a-76bd-4df7-b942-4553334f046c-config-data" (OuterVolumeSpecName: "config-data") pod "5977817a-76bd-4df7-b942-4553334f046c" (UID: "5977817a-76bd-4df7-b942-4553334f046c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:05:51 crc kubenswrapper[4760]: I0121 16:05:51.244755 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5977817a-76bd-4df7-b942-4553334f046c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:51 crc kubenswrapper[4760]: I0121 16:05:51.244779 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5977817a-76bd-4df7-b942-4553334f046c-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:51 crc kubenswrapper[4760]: I0121 16:05:51.323020 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-28f8s" Jan 21 16:05:51 crc kubenswrapper[4760]: I0121 16:05:51.330892 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ltr79-config-zwhhs" Jan 21 16:05:51 crc kubenswrapper[4760]: I0121 16:05:51.534209 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-tjb9t"] Jan 21 16:05:51 crc kubenswrapper[4760]: I0121 16:05:51.766641 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 21 16:05:51 crc kubenswrapper[4760]: W0121 16:05:51.775539 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1ccc2ed_d1e8_4b84_807d_55d70e8def12.slice/crio-a8d8167a0cae64232fe9cef92b46021a23047f46662cd7619b6287d824c3fee5 WatchSource:0}: Error finding container a8d8167a0cae64232fe9cef92b46021a23047f46662cd7619b6287d824c3fee5: Status 404 returned error can't find the container with id a8d8167a0cae64232fe9cef92b46021a23047f46662cd7619b6287d824c3fee5 Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.285227 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ltr79-config-zwhhs"] Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.292145 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ltr79-config-zwhhs"] Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.374499 4760 generic.go:334] "Generic (PLEG): container finished" podID="6d4e60fd-bb4c-4460-87db-729dac85afbc" containerID="81025c437bf683bd16828e1e94a515f5499117166adbfe37c74158127779092b" exitCode=0 Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.374578 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-tjb9t" event={"ID":"6d4e60fd-bb4c-4460-87db-729dac85afbc","Type":"ContainerDied","Data":"81025c437bf683bd16828e1e94a515f5499117166adbfe37c74158127779092b"} Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.374610 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/root-account-create-update-tjb9t" event={"ID":"6d4e60fd-bb4c-4460-87db-729dac85afbc","Type":"ContainerStarted","Data":"263587be6517e0d1cb792001b033558890df5477033a9df0c0e6071cd7b42b91"} Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.376198 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d1ccc2ed-d1e8-4b84-807d-55d70e8def12","Type":"ContainerStarted","Data":"a8d8167a0cae64232fe9cef92b46021a23047f46662cd7619b6287d824c3fee5"} Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.679083 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-47vt9"] Jan 21 16:05:52 crc kubenswrapper[4760]: E0121 16:05:52.679582 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d49e47d-fcb6-40d9-9312-8d4e16d28ccc" containerName="ovn-config" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.679616 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d49e47d-fcb6-40d9-9312-8d4e16d28ccc" containerName="ovn-config" Jan 21 16:05:52 crc kubenswrapper[4760]: E0121 16:05:52.679631 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5977817a-76bd-4df7-b942-4553334f046c" containerName="keystone-db-sync" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.679637 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="5977817a-76bd-4df7-b942-4553334f046c" containerName="keystone-db-sync" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.679827 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="5977817a-76bd-4df7-b942-4553334f046c" containerName="keystone-db-sync" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.679848 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d49e47d-fcb6-40d9-9312-8d4e16d28ccc" containerName="ovn-config" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.680811 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9d85d47c-47vt9" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.701825 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-jxk64"] Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.703395 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-jxk64" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.712080 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.712282 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-5vfwv" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.712522 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.712693 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.712848 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.729205 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-47vt9"] Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.752303 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-jxk64"] Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.793272 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f51922da-9bc6-45ad-91ec-9ebabdf2abff-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9d85d47c-47vt9\" (UID: \"f51922da-9bc6-45ad-91ec-9ebabdf2abff\") " pod="openstack/dnsmasq-dns-5c9d85d47c-47vt9" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 
16:05:52.793941 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f51922da-9bc6-45ad-91ec-9ebabdf2abff-dns-svc\") pod \"dnsmasq-dns-5c9d85d47c-47vt9\" (UID: \"f51922da-9bc6-45ad-91ec-9ebabdf2abff\") " pod="openstack/dnsmasq-dns-5c9d85d47c-47vt9" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.794275 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpv8b\" (UniqueName: \"kubernetes.io/projected/f51922da-9bc6-45ad-91ec-9ebabdf2abff-kube-api-access-fpv8b\") pod \"dnsmasq-dns-5c9d85d47c-47vt9\" (UID: \"f51922da-9bc6-45ad-91ec-9ebabdf2abff\") " pod="openstack/dnsmasq-dns-5c9d85d47c-47vt9" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.794508 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f51922da-9bc6-45ad-91ec-9ebabdf2abff-config\") pod \"dnsmasq-dns-5c9d85d47c-47vt9\" (UID: \"f51922da-9bc6-45ad-91ec-9ebabdf2abff\") " pod="openstack/dnsmasq-dns-5c9d85d47c-47vt9" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.794665 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f51922da-9bc6-45ad-91ec-9ebabdf2abff-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9d85d47c-47vt9\" (UID: \"f51922da-9bc6-45ad-91ec-9ebabdf2abff\") " pod="openstack/dnsmasq-dns-5c9d85d47c-47vt9" Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.854005 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6f6cd74fcc-kvhmv"] Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.862818 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6f6cd74fcc-kvhmv"
Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.867974 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon"
Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.868437 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts"
Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.868695 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data"
Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.869547 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-58rtq"
Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.897001 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/154b8943-7072-4dbe-89b0-492e321973f1-horizon-secret-key\") pod \"horizon-6f6cd74fcc-kvhmv\" (UID: \"154b8943-7072-4dbe-89b0-492e321973f1\") " pod="openstack/horizon-6f6cd74fcc-kvhmv"
Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.897089 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f51922da-9bc6-45ad-91ec-9ebabdf2abff-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9d85d47c-47vt9\" (UID: \"f51922da-9bc6-45ad-91ec-9ebabdf2abff\") " pod="openstack/dnsmasq-dns-5c9d85d47c-47vt9"
Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.897121 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3a06513-66a9-4f6f-b419-d6e6c6427547-combined-ca-bundle\") pod \"keystone-bootstrap-jxk64\" (UID: \"c3a06513-66a9-4f6f-b419-d6e6c6427547\") " pod="openstack/keystone-bootstrap-jxk64"
Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.897156 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f51922da-9bc6-45ad-91ec-9ebabdf2abff-dns-svc\") pod \"dnsmasq-dns-5c9d85d47c-47vt9\" (UID: \"f51922da-9bc6-45ad-91ec-9ebabdf2abff\") " pod="openstack/dnsmasq-dns-5c9d85d47c-47vt9"
Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.897198 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/154b8943-7072-4dbe-89b0-492e321973f1-scripts\") pod \"horizon-6f6cd74fcc-kvhmv\" (UID: \"154b8943-7072-4dbe-89b0-492e321973f1\") " pod="openstack/horizon-6f6cd74fcc-kvhmv"
Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.897247 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c3a06513-66a9-4f6f-b419-d6e6c6427547-credential-keys\") pod \"keystone-bootstrap-jxk64\" (UID: \"c3a06513-66a9-4f6f-b419-d6e6c6427547\") " pod="openstack/keystone-bootstrap-jxk64"
Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.897271 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpv8b\" (UniqueName: \"kubernetes.io/projected/f51922da-9bc6-45ad-91ec-9ebabdf2abff-kube-api-access-fpv8b\") pod \"dnsmasq-dns-5c9d85d47c-47vt9\" (UID: \"f51922da-9bc6-45ad-91ec-9ebabdf2abff\") " pod="openstack/dnsmasq-dns-5c9d85d47c-47vt9"
Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.897292 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxnm7\" (UniqueName: \"kubernetes.io/projected/154b8943-7072-4dbe-89b0-492e321973f1-kube-api-access-zxnm7\") pod \"horizon-6f6cd74fcc-kvhmv\" (UID: \"154b8943-7072-4dbe-89b0-492e321973f1\") " pod="openstack/horizon-6f6cd74fcc-kvhmv"
Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.897449 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f51922da-9bc6-45ad-91ec-9ebabdf2abff-config\") pod \"dnsmasq-dns-5c9d85d47c-47vt9\" (UID: \"f51922da-9bc6-45ad-91ec-9ebabdf2abff\") " pod="openstack/dnsmasq-dns-5c9d85d47c-47vt9"
Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.897506 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c3a06513-66a9-4f6f-b419-d6e6c6427547-fernet-keys\") pod \"keystone-bootstrap-jxk64\" (UID: \"c3a06513-66a9-4f6f-b419-d6e6c6427547\") " pod="openstack/keystone-bootstrap-jxk64"
Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.897538 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7d8h\" (UniqueName: \"kubernetes.io/projected/c3a06513-66a9-4f6f-b419-d6e6c6427547-kube-api-access-h7d8h\") pod \"keystone-bootstrap-jxk64\" (UID: \"c3a06513-66a9-4f6f-b419-d6e6c6427547\") " pod="openstack/keystone-bootstrap-jxk64"
Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.897563 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f51922da-9bc6-45ad-91ec-9ebabdf2abff-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9d85d47c-47vt9\" (UID: \"f51922da-9bc6-45ad-91ec-9ebabdf2abff\") " pod="openstack/dnsmasq-dns-5c9d85d47c-47vt9"
Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.897593 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3a06513-66a9-4f6f-b419-d6e6c6427547-scripts\") pod \"keystone-bootstrap-jxk64\" (UID: \"c3a06513-66a9-4f6f-b419-d6e6c6427547\") " pod="openstack/keystone-bootstrap-jxk64"
Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.897620 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/154b8943-7072-4dbe-89b0-492e321973f1-logs\") pod \"horizon-6f6cd74fcc-kvhmv\" (UID: \"154b8943-7072-4dbe-89b0-492e321973f1\") " pod="openstack/horizon-6f6cd74fcc-kvhmv"
Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.897647 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/154b8943-7072-4dbe-89b0-492e321973f1-config-data\") pod \"horizon-6f6cd74fcc-kvhmv\" (UID: \"154b8943-7072-4dbe-89b0-492e321973f1\") " pod="openstack/horizon-6f6cd74fcc-kvhmv"
Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.897685 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3a06513-66a9-4f6f-b419-d6e6c6427547-config-data\") pod \"keystone-bootstrap-jxk64\" (UID: \"c3a06513-66a9-4f6f-b419-d6e6c6427547\") " pod="openstack/keystone-bootstrap-jxk64"
Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.898819 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f51922da-9bc6-45ad-91ec-9ebabdf2abff-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9d85d47c-47vt9\" (UID: \"f51922da-9bc6-45ad-91ec-9ebabdf2abff\") " pod="openstack/dnsmasq-dns-5c9d85d47c-47vt9"
Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.899569 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f51922da-9bc6-45ad-91ec-9ebabdf2abff-dns-svc\") pod \"dnsmasq-dns-5c9d85d47c-47vt9\" (UID: \"f51922da-9bc6-45ad-91ec-9ebabdf2abff\") " pod="openstack/dnsmasq-dns-5c9d85d47c-47vt9"
Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.899818 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6f6cd74fcc-kvhmv"]
Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.900603 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f51922da-9bc6-45ad-91ec-9ebabdf2abff-config\") pod \"dnsmasq-dns-5c9d85d47c-47vt9\" (UID: \"f51922da-9bc6-45ad-91ec-9ebabdf2abff\") " pod="openstack/dnsmasq-dns-5c9d85d47c-47vt9"
Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.900704 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f51922da-9bc6-45ad-91ec-9ebabdf2abff-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9d85d47c-47vt9\" (UID: \"f51922da-9bc6-45ad-91ec-9ebabdf2abff\") " pod="openstack/dnsmasq-dns-5c9d85d47c-47vt9"
Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.921006 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-j76bd"]
Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.922416 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-j76bd"
Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.935191 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpv8b\" (UniqueName: \"kubernetes.io/projected/f51922da-9bc6-45ad-91ec-9ebabdf2abff-kube-api-access-fpv8b\") pod \"dnsmasq-dns-5c9d85d47c-47vt9\" (UID: \"f51922da-9bc6-45ad-91ec-9ebabdf2abff\") " pod="openstack/dnsmasq-dns-5c9d85d47c-47vt9"
Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.936185 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.936400 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.936552 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-l8crm"
Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.992758 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-j76bd"]
Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.998855 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c3a06513-66a9-4f6f-b419-d6e6c6427547-fernet-keys\") pod \"keystone-bootstrap-jxk64\" (UID: \"c3a06513-66a9-4f6f-b419-d6e6c6427547\") " pod="openstack/keystone-bootstrap-jxk64"
Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.998905 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7d8h\" (UniqueName: \"kubernetes.io/projected/c3a06513-66a9-4f6f-b419-d6e6c6427547-kube-api-access-h7d8h\") pod \"keystone-bootstrap-jxk64\" (UID: \"c3a06513-66a9-4f6f-b419-d6e6c6427547\") " pod="openstack/keystone-bootstrap-jxk64"
Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.998940 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3a06513-66a9-4f6f-b419-d6e6c6427547-scripts\") pod \"keystone-bootstrap-jxk64\" (UID: \"c3a06513-66a9-4f6f-b419-d6e6c6427547\") " pod="openstack/keystone-bootstrap-jxk64"
Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.998968 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/154b8943-7072-4dbe-89b0-492e321973f1-logs\") pod \"horizon-6f6cd74fcc-kvhmv\" (UID: \"154b8943-7072-4dbe-89b0-492e321973f1\") " pod="openstack/horizon-6f6cd74fcc-kvhmv"
Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.998993 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/154b8943-7072-4dbe-89b0-492e321973f1-config-data\") pod \"horizon-6f6cd74fcc-kvhmv\" (UID: \"154b8943-7072-4dbe-89b0-492e321973f1\") " pod="openstack/horizon-6f6cd74fcc-kvhmv"
Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.999026 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3a06513-66a9-4f6f-b419-d6e6c6427547-config-data\") pod \"keystone-bootstrap-jxk64\" (UID: \"c3a06513-66a9-4f6f-b419-d6e6c6427547\") " pod="openstack/keystone-bootstrap-jxk64"
Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.999076 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/154b8943-7072-4dbe-89b0-492e321973f1-horizon-secret-key\") pod \"horizon-6f6cd74fcc-kvhmv\" (UID: \"154b8943-7072-4dbe-89b0-492e321973f1\") " pod="openstack/horizon-6f6cd74fcc-kvhmv"
Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.999127 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3a06513-66a9-4f6f-b419-d6e6c6427547-combined-ca-bundle\") pod \"keystone-bootstrap-jxk64\" (UID: \"c3a06513-66a9-4f6f-b419-d6e6c6427547\") " pod="openstack/keystone-bootstrap-jxk64"
Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.999158 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/154b8943-7072-4dbe-89b0-492e321973f1-scripts\") pod \"horizon-6f6cd74fcc-kvhmv\" (UID: \"154b8943-7072-4dbe-89b0-492e321973f1\") " pod="openstack/horizon-6f6cd74fcc-kvhmv"
Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.999194 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c3a06513-66a9-4f6f-b419-d6e6c6427547-credential-keys\") pod \"keystone-bootstrap-jxk64\" (UID: \"c3a06513-66a9-4f6f-b419-d6e6c6427547\") " pod="openstack/keystone-bootstrap-jxk64"
Jan 21 16:05:52 crc kubenswrapper[4760]: I0121 16:05:52.999245 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxnm7\" (UniqueName: \"kubernetes.io/projected/154b8943-7072-4dbe-89b0-492e321973f1-kube-api-access-zxnm7\") pod \"horizon-6f6cd74fcc-kvhmv\" (UID: \"154b8943-7072-4dbe-89b0-492e321973f1\") " pod="openstack/horizon-6f6cd74fcc-kvhmv"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.008623 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c3a06513-66a9-4f6f-b419-d6e6c6427547-fernet-keys\") pod \"keystone-bootstrap-jxk64\" (UID: \"c3a06513-66a9-4f6f-b419-d6e6c6427547\") " pod="openstack/keystone-bootstrap-jxk64"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.009369 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/154b8943-7072-4dbe-89b0-492e321973f1-scripts\") pod \"horizon-6f6cd74fcc-kvhmv\" (UID: \"154b8943-7072-4dbe-89b0-492e321973f1\") " pod="openstack/horizon-6f6cd74fcc-kvhmv"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.011875 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/154b8943-7072-4dbe-89b0-492e321973f1-logs\") pod \"horizon-6f6cd74fcc-kvhmv\" (UID: \"154b8943-7072-4dbe-89b0-492e321973f1\") " pod="openstack/horizon-6f6cd74fcc-kvhmv"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.014236 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/154b8943-7072-4dbe-89b0-492e321973f1-config-data\") pod \"horizon-6f6cd74fcc-kvhmv\" (UID: \"154b8943-7072-4dbe-89b0-492e321973f1\") " pod="openstack/horizon-6f6cd74fcc-kvhmv"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.016063 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3a06513-66a9-4f6f-b419-d6e6c6427547-config-data\") pod \"keystone-bootstrap-jxk64\" (UID: \"c3a06513-66a9-4f6f-b419-d6e6c6427547\") " pod="openstack/keystone-bootstrap-jxk64"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.022134 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/154b8943-7072-4dbe-89b0-492e321973f1-horizon-secret-key\") pod \"horizon-6f6cd74fcc-kvhmv\" (UID: \"154b8943-7072-4dbe-89b0-492e321973f1\") " pod="openstack/horizon-6f6cd74fcc-kvhmv"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.022739 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3a06513-66a9-4f6f-b419-d6e6c6427547-scripts\") pod \"keystone-bootstrap-jxk64\" (UID: \"c3a06513-66a9-4f6f-b419-d6e6c6427547\") " pod="openstack/keystone-bootstrap-jxk64"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.023269 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c3a06513-66a9-4f6f-b419-d6e6c6427547-credential-keys\") pod \"keystone-bootstrap-jxk64\" (UID: \"c3a06513-66a9-4f6f-b419-d6e6c6427547\") " pod="openstack/keystone-bootstrap-jxk64"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.035371 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxnm7\" (UniqueName: \"kubernetes.io/projected/154b8943-7072-4dbe-89b0-492e321973f1-kube-api-access-zxnm7\") pod \"horizon-6f6cd74fcc-kvhmv\" (UID: \"154b8943-7072-4dbe-89b0-492e321973f1\") " pod="openstack/horizon-6f6cd74fcc-kvhmv"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.039412 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7d8h\" (UniqueName: \"kubernetes.io/projected/c3a06513-66a9-4f6f-b419-d6e6c6427547-kube-api-access-h7d8h\") pod \"keystone-bootstrap-jxk64\" (UID: \"c3a06513-66a9-4f6f-b419-d6e6c6427547\") " pod="openstack/keystone-bootstrap-jxk64"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.053006 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3a06513-66a9-4f6f-b419-d6e6c6427547-combined-ca-bundle\") pod \"keystone-bootstrap-jxk64\" (UID: \"c3a06513-66a9-4f6f-b419-d6e6c6427547\") " pod="openstack/keystone-bootstrap-jxk64"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.058623 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-tp55g"]
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.059853 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-tp55g"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.061794 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9d85d47c-47vt9"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.073077 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.073478 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-4pz29"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.073703 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.075214 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-tp55g"]
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.085113 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-jxk64"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.101557 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bf0e00e-fc38-45a9-8615-dd5398ed1209-combined-ca-bundle\") pod \"cinder-db-sync-j76bd\" (UID: \"3bf0e00e-fc38-45a9-8615-dd5398ed1209\") " pod="openstack/cinder-db-sync-j76bd"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.101630 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3bf0e00e-fc38-45a9-8615-dd5398ed1209-etc-machine-id\") pod \"cinder-db-sync-j76bd\" (UID: \"3bf0e00e-fc38-45a9-8615-dd5398ed1209\") " pod="openstack/cinder-db-sync-j76bd"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.101668 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3bf0e00e-fc38-45a9-8615-dd5398ed1209-db-sync-config-data\") pod \"cinder-db-sync-j76bd\" (UID: \"3bf0e00e-fc38-45a9-8615-dd5398ed1209\") " pod="openstack/cinder-db-sync-j76bd"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.101708 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bf0e00e-fc38-45a9-8615-dd5398ed1209-config-data\") pod \"cinder-db-sync-j76bd\" (UID: \"3bf0e00e-fc38-45a9-8615-dd5398ed1209\") " pod="openstack/cinder-db-sync-j76bd"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.101745 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bf0e00e-fc38-45a9-8615-dd5398ed1209-scripts\") pod \"cinder-db-sync-j76bd\" (UID: \"3bf0e00e-fc38-45a9-8615-dd5398ed1209\") " pod="openstack/cinder-db-sync-j76bd"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.101773 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nt8ww\" (UniqueName: \"kubernetes.io/projected/3bf0e00e-fc38-45a9-8615-dd5398ed1209-kube-api-access-nt8ww\") pod \"cinder-db-sync-j76bd\" (UID: \"3bf0e00e-fc38-45a9-8615-dd5398ed1209\") " pod="openstack/cinder-db-sync-j76bd"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.107705 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-pgvwf"]
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.109003 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-pgvwf"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.115449 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.115703 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-5wvcs"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.136233 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-65fzw"]
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.137298 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-65fzw"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.140716 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.148010 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-jlmf6"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.148049 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.513514 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-pgvwf"]
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.518459 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/753473df-c019-484a-95d5-01f46173e10a-combined-ca-bundle\") pod \"neutron-db-sync-tp55g\" (UID: \"753473df-c019-484a-95d5-01f46173e10a\") " pod="openstack/neutron-db-sync-tp55g"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.526089 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crhwn\" (UniqueName: \"kubernetes.io/projected/e272905b-28ec-4f49-8c51-f5c5d97c4a9d-kube-api-access-crhwn\") pod \"barbican-db-sync-pgvwf\" (UID: \"e272905b-28ec-4f49-8c51-f5c5d97c4a9d\") " pod="openstack/barbican-db-sync-pgvwf"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.526164 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dt4g\" (UniqueName: \"kubernetes.io/projected/820ab298-8a58-4ac5-b7d2-ff030c6d2aff-kube-api-access-8dt4g\") pod \"placement-db-sync-65fzw\" (UID: \"820ab298-8a58-4ac5-b7d2-ff030c6d2aff\") " pod="openstack/placement-db-sync-65fzw"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.526278 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3bf0e00e-fc38-45a9-8615-dd5398ed1209-etc-machine-id\") pod \"cinder-db-sync-j76bd\" (UID: \"3bf0e00e-fc38-45a9-8615-dd5398ed1209\") " pod="openstack/cinder-db-sync-j76bd"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.526359 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e272905b-28ec-4f49-8c51-f5c5d97c4a9d-combined-ca-bundle\") pod \"barbican-db-sync-pgvwf\" (UID: \"e272905b-28ec-4f49-8c51-f5c5d97c4a9d\") " pod="openstack/barbican-db-sync-pgvwf"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.526419 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3bf0e00e-fc38-45a9-8615-dd5398ed1209-db-sync-config-data\") pod \"cinder-db-sync-j76bd\" (UID: \"3bf0e00e-fc38-45a9-8615-dd5398ed1209\") " pod="openstack/cinder-db-sync-j76bd"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.521239 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6f6cd74fcc-kvhmv"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.526631 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3bf0e00e-fc38-45a9-8615-dd5398ed1209-etc-machine-id\") pod \"cinder-db-sync-j76bd\" (UID: \"3bf0e00e-fc38-45a9-8615-dd5398ed1209\") " pod="openstack/cinder-db-sync-j76bd"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.526515 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/753473df-c019-484a-95d5-01f46173e10a-config\") pod \"neutron-db-sync-tp55g\" (UID: \"753473df-c019-484a-95d5-01f46173e10a\") " pod="openstack/neutron-db-sync-tp55g"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.526761 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bf0e00e-fc38-45a9-8615-dd5398ed1209-config-data\") pod \"cinder-db-sync-j76bd\" (UID: \"3bf0e00e-fc38-45a9-8615-dd5398ed1209\") " pod="openstack/cinder-db-sync-j76bd"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.526820 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e272905b-28ec-4f49-8c51-f5c5d97c4a9d-db-sync-config-data\") pod \"barbican-db-sync-pgvwf\" (UID: \"e272905b-28ec-4f49-8c51-f5c5d97c4a9d\") " pod="openstack/barbican-db-sync-pgvwf"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.526898 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/820ab298-8a58-4ac5-b7d2-ff030c6d2aff-combined-ca-bundle\") pod \"placement-db-sync-65fzw\" (UID: \"820ab298-8a58-4ac5-b7d2-ff030c6d2aff\") " pod="openstack/placement-db-sync-65fzw"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.526926 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bf0e00e-fc38-45a9-8615-dd5398ed1209-scripts\") pod \"cinder-db-sync-j76bd\" (UID: \"3bf0e00e-fc38-45a9-8615-dd5398ed1209\") " pod="openstack/cinder-db-sync-j76bd"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.526961 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nt8ww\" (UniqueName: \"kubernetes.io/projected/3bf0e00e-fc38-45a9-8615-dd5398ed1209-kube-api-access-nt8ww\") pod \"cinder-db-sync-j76bd\" (UID: \"3bf0e00e-fc38-45a9-8615-dd5398ed1209\") " pod="openstack/cinder-db-sync-j76bd"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.527008 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/820ab298-8a58-4ac5-b7d2-ff030c6d2aff-scripts\") pod \"placement-db-sync-65fzw\" (UID: \"820ab298-8a58-4ac5-b7d2-ff030c6d2aff\") " pod="openstack/placement-db-sync-65fzw"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.527072 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/820ab298-8a58-4ac5-b7d2-ff030c6d2aff-logs\") pod \"placement-db-sync-65fzw\" (UID: \"820ab298-8a58-4ac5-b7d2-ff030c6d2aff\") " pod="openstack/placement-db-sync-65fzw"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.527145 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/820ab298-8a58-4ac5-b7d2-ff030c6d2aff-config-data\") pod \"placement-db-sync-65fzw\" (UID: \"820ab298-8a58-4ac5-b7d2-ff030c6d2aff\") " pod="openstack/placement-db-sync-65fzw"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.527242 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzgmh\" (UniqueName: \"kubernetes.io/projected/753473df-c019-484a-95d5-01f46173e10a-kube-api-access-tzgmh\") pod \"neutron-db-sync-tp55g\" (UID: \"753473df-c019-484a-95d5-01f46173e10a\") " pod="openstack/neutron-db-sync-tp55g"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.527273 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bf0e00e-fc38-45a9-8615-dd5398ed1209-combined-ca-bundle\") pod \"cinder-db-sync-j76bd\" (UID: \"3bf0e00e-fc38-45a9-8615-dd5398ed1209\") " pod="openstack/cinder-db-sync-j76bd"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.534299 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-47vt9"]
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.534866 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bf0e00e-fc38-45a9-8615-dd5398ed1209-scripts\") pod \"cinder-db-sync-j76bd\" (UID: \"3bf0e00e-fc38-45a9-8615-dd5398ed1209\") " pod="openstack/cinder-db-sync-j76bd"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.553516 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-f68f58447-zsv2n"]
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.562403 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bf0e00e-fc38-45a9-8615-dd5398ed1209-combined-ca-bundle\") pod \"cinder-db-sync-j76bd\" (UID: \"3bf0e00e-fc38-45a9-8615-dd5398ed1209\") " pod="openstack/cinder-db-sync-j76bd"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.562855 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3bf0e00e-fc38-45a9-8615-dd5398ed1209-db-sync-config-data\") pod \"cinder-db-sync-j76bd\" (UID: \"3bf0e00e-fc38-45a9-8615-dd5398ed1209\") " pod="openstack/cinder-db-sync-j76bd"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.617478 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bf0e00e-fc38-45a9-8615-dd5398ed1209-config-data\") pod \"cinder-db-sync-j76bd\" (UID: \"3bf0e00e-fc38-45a9-8615-dd5398ed1209\") " pod="openstack/cinder-db-sync-j76bd"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.648197 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nt8ww\" (UniqueName: \"kubernetes.io/projected/3bf0e00e-fc38-45a9-8615-dd5398ed1209-kube-api-access-nt8ww\") pod \"cinder-db-sync-j76bd\" (UID: \"3bf0e00e-fc38-45a9-8615-dd5398ed1209\") " pod="openstack/cinder-db-sync-j76bd"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.653257 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-f68f58447-zsv2n"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.672944 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/820ab298-8a58-4ac5-b7d2-ff030c6d2aff-logs\") pod \"placement-db-sync-65fzw\" (UID: \"820ab298-8a58-4ac5-b7d2-ff030c6d2aff\") " pod="openstack/placement-db-sync-65fzw"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.673069 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/820ab298-8a58-4ac5-b7d2-ff030c6d2aff-config-data\") pod \"placement-db-sync-65fzw\" (UID: \"820ab298-8a58-4ac5-b7d2-ff030c6d2aff\") " pod="openstack/placement-db-sync-65fzw"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.673157 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzgmh\" (UniqueName: \"kubernetes.io/projected/753473df-c019-484a-95d5-01f46173e10a-kube-api-access-tzgmh\") pod \"neutron-db-sync-tp55g\" (UID: \"753473df-c019-484a-95d5-01f46173e10a\") " pod="openstack/neutron-db-sync-tp55g"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.673208 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/753473df-c019-484a-95d5-01f46173e10a-combined-ca-bundle\") pod \"neutron-db-sync-tp55g\" (UID: \"753473df-c019-484a-95d5-01f46173e10a\") " pod="openstack/neutron-db-sync-tp55g"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.673264 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crhwn\" (UniqueName: \"kubernetes.io/projected/e272905b-28ec-4f49-8c51-f5c5d97c4a9d-kube-api-access-crhwn\") pod \"barbican-db-sync-pgvwf\" (UID: \"e272905b-28ec-4f49-8c51-f5c5d97c4a9d\") " pod="openstack/barbican-db-sync-pgvwf"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.673543 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dt4g\" (UniqueName: \"kubernetes.io/projected/820ab298-8a58-4ac5-b7d2-ff030c6d2aff-kube-api-access-8dt4g\") pod \"placement-db-sync-65fzw\" (UID: \"820ab298-8a58-4ac5-b7d2-ff030c6d2aff\") " pod="openstack/placement-db-sync-65fzw"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.681200 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e272905b-28ec-4f49-8c51-f5c5d97c4a9d-combined-ca-bundle\") pod \"barbican-db-sync-pgvwf\" (UID: \"e272905b-28ec-4f49-8c51-f5c5d97c4a9d\") " pod="openstack/barbican-db-sync-pgvwf"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.688649 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/820ab298-8a58-4ac5-b7d2-ff030c6d2aff-logs\") pod \"placement-db-sync-65fzw\" (UID: \"820ab298-8a58-4ac5-b7d2-ff030c6d2aff\") " pod="openstack/placement-db-sync-65fzw"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.689523 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/753473df-c019-484a-95d5-01f46173e10a-config\") pod \"neutron-db-sync-tp55g\" (UID: \"753473df-c019-484a-95d5-01f46173e10a\") " pod="openstack/neutron-db-sync-tp55g"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.689632 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e272905b-28ec-4f49-8c51-f5c5d97c4a9d-db-sync-config-data\") pod \"barbican-db-sync-pgvwf\" (UID: \"e272905b-28ec-4f49-8c51-f5c5d97c4a9d\") " pod="openstack/barbican-db-sync-pgvwf"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.689764 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/820ab298-8a58-4ac5-b7d2-ff030c6d2aff-combined-ca-bundle\") pod \"placement-db-sync-65fzw\" (UID: \"820ab298-8a58-4ac5-b7d2-ff030c6d2aff\") " pod="openstack/placement-db-sync-65fzw"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.689859 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/820ab298-8a58-4ac5-b7d2-ff030c6d2aff-scripts\") pod \"placement-db-sync-65fzw\" (UID: \"820ab298-8a58-4ac5-b7d2-ff030c6d2aff\") " pod="openstack/placement-db-sync-65fzw"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.708188 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e272905b-28ec-4f49-8c51-f5c5d97c4a9d-combined-ca-bundle\") pod \"barbican-db-sync-pgvwf\" (UID: \"e272905b-28ec-4f49-8c51-f5c5d97c4a9d\") " pod="openstack/barbican-db-sync-pgvwf"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.708391 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/820ab298-8a58-4ac5-b7d2-ff030c6d2aff-scripts\") pod \"placement-db-sync-65fzw\" (UID: \"820ab298-8a58-4ac5-b7d2-ff030c6d2aff\") " pod="openstack/placement-db-sync-65fzw"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.710029 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-j76bd"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.713632 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/753473df-c019-484a-95d5-01f46173e10a-combined-ca-bundle\") pod \"neutron-db-sync-tp55g\" (UID: \"753473df-c019-484a-95d5-01f46173e10a\") " pod="openstack/neutron-db-sync-tp55g"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.714521 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e272905b-28ec-4f49-8c51-f5c5d97c4a9d-db-sync-config-data\") pod \"barbican-db-sync-pgvwf\" (UID: \"e272905b-28ec-4f49-8c51-f5c5d97c4a9d\") " pod="openstack/barbican-db-sync-pgvwf"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.723977 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d49e47d-fcb6-40d9-9312-8d4e16d28ccc" path="/var/lib/kubelet/pods/9d49e47d-fcb6-40d9-9312-8d4e16d28ccc/volumes"
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.725216 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-65fzw"]
Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.730027 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/753473df-c019-484a-95d5-01f46173e10a-config\") pod \"neutron-db-sync-tp55g\" (UID: \"753473df-c019-484a-95d5-01f46173e10a\") "
pod="openstack/neutron-db-sync-tp55g" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.732383 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/820ab298-8a58-4ac5-b7d2-ff030c6d2aff-combined-ca-bundle\") pod \"placement-db-sync-65fzw\" (UID: \"820ab298-8a58-4ac5-b7d2-ff030c6d2aff\") " pod="openstack/placement-db-sync-65fzw" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.734844 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzgmh\" (UniqueName: \"kubernetes.io/projected/753473df-c019-484a-95d5-01f46173e10a-kube-api-access-tzgmh\") pod \"neutron-db-sync-tp55g\" (UID: \"753473df-c019-484a-95d5-01f46173e10a\") " pod="openstack/neutron-db-sync-tp55g" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.739349 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/820ab298-8a58-4ac5-b7d2-ff030c6d2aff-config-data\") pod \"placement-db-sync-65fzw\" (UID: \"820ab298-8a58-4ac5-b7d2-ff030c6d2aff\") " pod="openstack/placement-db-sync-65fzw" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.746531 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-4mwgm"] Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.748796 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6ffb94d8ff-4mwgm" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.765350 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-f68f58447-zsv2n"] Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.776730 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dt4g\" (UniqueName: \"kubernetes.io/projected/820ab298-8a58-4ac5-b7d2-ff030c6d2aff-kube-api-access-8dt4g\") pod \"placement-db-sync-65fzw\" (UID: \"820ab298-8a58-4ac5-b7d2-ff030c6d2aff\") " pod="openstack/placement-db-sync-65fzw" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.777516 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crhwn\" (UniqueName: \"kubernetes.io/projected/e272905b-28ec-4f49-8c51-f5c5d97c4a9d-kube-api-access-crhwn\") pod \"barbican-db-sync-pgvwf\" (UID: \"e272905b-28ec-4f49-8c51-f5c5d97c4a9d\") " pod="openstack/barbican-db-sync-pgvwf" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.785501 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.787723 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.792797 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.797951 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.802690 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-4mwgm"] Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.803608 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/45befe43-dd76-4ea4-a09e-93342d93d9fc-horizon-secret-key\") pod \"horizon-f68f58447-zsv2n\" (UID: \"45befe43-dd76-4ea4-a09e-93342d93d9fc\") " pod="openstack/horizon-f68f58447-zsv2n" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.803669 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24140731-e427-429e-a6cc-ad33f28eadb3-config\") pod \"dnsmasq-dns-6ffb94d8ff-4mwgm\" (UID: \"24140731-e427-429e-a6cc-ad33f28eadb3\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-4mwgm" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.803748 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24140731-e427-429e-a6cc-ad33f28eadb3-ovsdbserver-sb\") pod \"dnsmasq-dns-6ffb94d8ff-4mwgm\" (UID: \"24140731-e427-429e-a6cc-ad33f28eadb3\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-4mwgm" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.803787 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/45befe43-dd76-4ea4-a09e-93342d93d9fc-logs\") pod \"horizon-f68f58447-zsv2n\" (UID: \"45befe43-dd76-4ea4-a09e-93342d93d9fc\") " pod="openstack/horizon-f68f58447-zsv2n" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.803801 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndzd2\" (UniqueName: \"kubernetes.io/projected/45befe43-dd76-4ea4-a09e-93342d93d9fc-kube-api-access-ndzd2\") pod \"horizon-f68f58447-zsv2n\" (UID: \"45befe43-dd76-4ea4-a09e-93342d93d9fc\") " pod="openstack/horizon-f68f58447-zsv2n" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.803840 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2clm\" (UniqueName: \"kubernetes.io/projected/24140731-e427-429e-a6cc-ad33f28eadb3-kube-api-access-g2clm\") pod \"dnsmasq-dns-6ffb94d8ff-4mwgm\" (UID: \"24140731-e427-429e-a6cc-ad33f28eadb3\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-4mwgm" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.803904 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/45befe43-dd76-4ea4-a09e-93342d93d9fc-config-data\") pod \"horizon-f68f58447-zsv2n\" (UID: \"45befe43-dd76-4ea4-a09e-93342d93d9fc\") " pod="openstack/horizon-f68f58447-zsv2n" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.803934 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24140731-e427-429e-a6cc-ad33f28eadb3-ovsdbserver-nb\") pod \"dnsmasq-dns-6ffb94d8ff-4mwgm\" (UID: \"24140731-e427-429e-a6cc-ad33f28eadb3\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-4mwgm" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.803954 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24140731-e427-429e-a6cc-ad33f28eadb3-dns-svc\") pod \"dnsmasq-dns-6ffb94d8ff-4mwgm\" (UID: \"24140731-e427-429e-a6cc-ad33f28eadb3\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-4mwgm" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.804045 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/45befe43-dd76-4ea4-a09e-93342d93d9fc-scripts\") pod \"horizon-f68f58447-zsv2n\" (UID: \"45befe43-dd76-4ea4-a09e-93342d93d9fc\") " pod="openstack/horizon-f68f58447-zsv2n" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.810155 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.905568 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/45befe43-dd76-4ea4-a09e-93342d93d9fc-scripts\") pod \"horizon-f68f58447-zsv2n\" (UID: \"45befe43-dd76-4ea4-a09e-93342d93d9fc\") " pod="openstack/horizon-f68f58447-zsv2n" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.905615 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/45befe43-dd76-4ea4-a09e-93342d93d9fc-horizon-secret-key\") pod \"horizon-f68f58447-zsv2n\" (UID: \"45befe43-dd76-4ea4-a09e-93342d93d9fc\") " pod="openstack/horizon-f68f58447-zsv2n" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.905643 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-scripts\") pod \"ceilometer-0\" (UID: \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\") " pod="openstack/ceilometer-0" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.905665 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\") " pod="openstack/ceilometer-0" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.905700 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24140731-e427-429e-a6cc-ad33f28eadb3-config\") pod \"dnsmasq-dns-6ffb94d8ff-4mwgm\" (UID: \"24140731-e427-429e-a6cc-ad33f28eadb3\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-4mwgm" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.905732 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxfdz\" (UniqueName: \"kubernetes.io/projected/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-kube-api-access-dxfdz\") pod \"ceilometer-0\" (UID: \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\") " pod="openstack/ceilometer-0" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.905758 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-config-data\") pod \"ceilometer-0\" (UID: \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\") " pod="openstack/ceilometer-0" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.905774 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24140731-e427-429e-a6cc-ad33f28eadb3-ovsdbserver-sb\") pod \"dnsmasq-dns-6ffb94d8ff-4mwgm\" (UID: \"24140731-e427-429e-a6cc-ad33f28eadb3\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-4mwgm" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.905803 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/45befe43-dd76-4ea4-a09e-93342d93d9fc-logs\") pod \"horizon-f68f58447-zsv2n\" (UID: \"45befe43-dd76-4ea4-a09e-93342d93d9fc\") " pod="openstack/horizon-f68f58447-zsv2n" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.905823 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndzd2\" (UniqueName: \"kubernetes.io/projected/45befe43-dd76-4ea4-a09e-93342d93d9fc-kube-api-access-ndzd2\") pod \"horizon-f68f58447-zsv2n\" (UID: \"45befe43-dd76-4ea4-a09e-93342d93d9fc\") " pod="openstack/horizon-f68f58447-zsv2n" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.905853 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-log-httpd\") pod \"ceilometer-0\" (UID: \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\") " pod="openstack/ceilometer-0" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.905875 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-run-httpd\") pod \"ceilometer-0\" (UID: \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\") " pod="openstack/ceilometer-0" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.905898 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2clm\" (UniqueName: \"kubernetes.io/projected/24140731-e427-429e-a6cc-ad33f28eadb3-kube-api-access-g2clm\") pod \"dnsmasq-dns-6ffb94d8ff-4mwgm\" (UID: \"24140731-e427-429e-a6cc-ad33f28eadb3\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-4mwgm" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.905926 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/45befe43-dd76-4ea4-a09e-93342d93d9fc-config-data\") pod 
\"horizon-f68f58447-zsv2n\" (UID: \"45befe43-dd76-4ea4-a09e-93342d93d9fc\") " pod="openstack/horizon-f68f58447-zsv2n" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.905949 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24140731-e427-429e-a6cc-ad33f28eadb3-ovsdbserver-nb\") pod \"dnsmasq-dns-6ffb94d8ff-4mwgm\" (UID: \"24140731-e427-429e-a6cc-ad33f28eadb3\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-4mwgm" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.905967 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24140731-e427-429e-a6cc-ad33f28eadb3-dns-svc\") pod \"dnsmasq-dns-6ffb94d8ff-4mwgm\" (UID: \"24140731-e427-429e-a6cc-ad33f28eadb3\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-4mwgm" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.905993 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\") " pod="openstack/ceilometer-0" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.906908 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45befe43-dd76-4ea4-a09e-93342d93d9fc-logs\") pod \"horizon-f68f58447-zsv2n\" (UID: \"45befe43-dd76-4ea4-a09e-93342d93d9fc\") " pod="openstack/horizon-f68f58447-zsv2n" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.907390 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/45befe43-dd76-4ea4-a09e-93342d93d9fc-scripts\") pod \"horizon-f68f58447-zsv2n\" (UID: \"45befe43-dd76-4ea4-a09e-93342d93d9fc\") " pod="openstack/horizon-f68f58447-zsv2n" Jan 21 16:05:53 crc 
kubenswrapper[4760]: I0121 16:05:53.907763 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24140731-e427-429e-a6cc-ad33f28eadb3-config\") pod \"dnsmasq-dns-6ffb94d8ff-4mwgm\" (UID: \"24140731-e427-429e-a6cc-ad33f28eadb3\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-4mwgm" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.909182 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24140731-e427-429e-a6cc-ad33f28eadb3-ovsdbserver-sb\") pod \"dnsmasq-dns-6ffb94d8ff-4mwgm\" (UID: \"24140731-e427-429e-a6cc-ad33f28eadb3\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-4mwgm" Jan 21 16:05:53 crc kubenswrapper[4760]: I0121 16:05:53.910875 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/45befe43-dd76-4ea4-a09e-93342d93d9fc-horizon-secret-key\") pod \"horizon-f68f58447-zsv2n\" (UID: \"45befe43-dd76-4ea4-a09e-93342d93d9fc\") " pod="openstack/horizon-f68f58447-zsv2n" Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:53.968579 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-tp55g" Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:53.991543 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-pgvwf" Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.254350 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24140731-e427-429e-a6cc-ad33f28eadb3-dns-svc\") pod \"dnsmasq-dns-6ffb94d8ff-4mwgm\" (UID: \"24140731-e427-429e-a6cc-ad33f28eadb3\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-4mwgm" Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.257139 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24140731-e427-429e-a6cc-ad33f28eadb3-ovsdbserver-nb\") pod \"dnsmasq-dns-6ffb94d8ff-4mwgm\" (UID: \"24140731-e427-429e-a6cc-ad33f28eadb3\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-4mwgm" Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.258064 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/45befe43-dd76-4ea4-a09e-93342d93d9fc-config-data\") pod \"horizon-f68f58447-zsv2n\" (UID: \"45befe43-dd76-4ea4-a09e-93342d93d9fc\") " pod="openstack/horizon-f68f58447-zsv2n" Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.258899 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-65fzw" Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.262831 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\") " pod="openstack/ceilometer-0" Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.262973 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-scripts\") pod \"ceilometer-0\" (UID: \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\") " pod="openstack/ceilometer-0" Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.263008 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\") " pod="openstack/ceilometer-0" Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.263072 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxfdz\" (UniqueName: \"kubernetes.io/projected/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-kube-api-access-dxfdz\") pod \"ceilometer-0\" (UID: \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\") " pod="openstack/ceilometer-0" Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.263162 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-config-data\") pod \"ceilometer-0\" (UID: \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\") " pod="openstack/ceilometer-0" Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.263266 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-log-httpd\") pod \"ceilometer-0\" (UID: \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\") " pod="openstack/ceilometer-0" Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.263285 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-run-httpd\") pod \"ceilometer-0\" (UID: \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\") " pod="openstack/ceilometer-0" Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.268209 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-log-httpd\") pod \"ceilometer-0\" (UID: \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\") " pod="openstack/ceilometer-0" Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.269448 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-run-httpd\") pod \"ceilometer-0\" (UID: \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\") " pod="openstack/ceilometer-0" Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.271020 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\") " pod="openstack/ceilometer-0" Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.275540 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\") " pod="openstack/ceilometer-0" Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.276282 
4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-config-data\") pod \"ceilometer-0\" (UID: \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\") " pod="openstack/ceilometer-0" Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.280711 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2clm\" (UniqueName: \"kubernetes.io/projected/24140731-e427-429e-a6cc-ad33f28eadb3-kube-api-access-g2clm\") pod \"dnsmasq-dns-6ffb94d8ff-4mwgm\" (UID: \"24140731-e427-429e-a6cc-ad33f28eadb3\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-4mwgm" Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.281164 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-scripts\") pod \"ceilometer-0\" (UID: \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\") " pod="openstack/ceilometer-0" Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.292974 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndzd2\" (UniqueName: \"kubernetes.io/projected/45befe43-dd76-4ea4-a09e-93342d93d9fc-kube-api-access-ndzd2\") pod \"horizon-f68f58447-zsv2n\" (UID: \"45befe43-dd76-4ea4-a09e-93342d93d9fc\") " pod="openstack/horizon-f68f58447-zsv2n" Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.309078 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxfdz\" (UniqueName: \"kubernetes.io/projected/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-kube-api-access-dxfdz\") pod \"ceilometer-0\" (UID: \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\") " pod="openstack/ceilometer-0" Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.408336 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-f68f58447-zsv2n" Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.479866 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ffb94d8ff-4mwgm" Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.505769 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.649153 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-tjb9t" Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.681210 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d4e60fd-bb4c-4460-87db-729dac85afbc-operator-scripts\") pod \"6d4e60fd-bb4c-4460-87db-729dac85afbc\" (UID: \"6d4e60fd-bb4c-4460-87db-729dac85afbc\") " Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.681366 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwzwf\" (UniqueName: \"kubernetes.io/projected/6d4e60fd-bb4c-4460-87db-729dac85afbc-kube-api-access-hwzwf\") pod \"6d4e60fd-bb4c-4460-87db-729dac85afbc\" (UID: \"6d4e60fd-bb4c-4460-87db-729dac85afbc\") " Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.682135 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d4e60fd-bb4c-4460-87db-729dac85afbc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6d4e60fd-bb4c-4460-87db-729dac85afbc" (UID: "6d4e60fd-bb4c-4460-87db-729dac85afbc"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.698194 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-tjb9t" event={"ID":"6d4e60fd-bb4c-4460-87db-729dac85afbc","Type":"ContainerDied","Data":"263587be6517e0d1cb792001b033558890df5477033a9df0c0e6071cd7b42b91"} Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.698239 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="263587be6517e0d1cb792001b033558890df5477033a9df0c0e6071cd7b42b91" Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.698311 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-tjb9t" Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.805889 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d4e60fd-bb4c-4460-87db-729dac85afbc-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.806156 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d4e60fd-bb4c-4460-87db-729dac85afbc-kube-api-access-hwzwf" (OuterVolumeSpecName: "kube-api-access-hwzwf") pod "6d4e60fd-bb4c-4460-87db-729dac85afbc" (UID: "6d4e60fd-bb4c-4460-87db-729dac85afbc"). InnerVolumeSpecName "kube-api-access-hwzwf". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.898594 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6f6cd74fcc-kvhmv"]
Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.907171 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwzwf\" (UniqueName: \"kubernetes.io/projected/6d4e60fd-bb4c-4460-87db-729dac85afbc-kube-api-access-hwzwf\") on node \"crc\" DevicePath \"\""
Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.923449 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-jxk64"]
Jan 21 16:05:54 crc kubenswrapper[4760]: I0121 16:05:54.932247 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-47vt9"]
Jan 21 16:05:54 crc kubenswrapper[4760]: W0121 16:05:54.933127 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod154b8943_7072_4dbe_89b0_492e321973f1.slice/crio-bb6fe64ae2a5bfa113a0cdb0b1c062a0f09bfca14434bc76b9e8979373673e5e WatchSource:0}: Error finding container bb6fe64ae2a5bfa113a0cdb0b1c062a0f09bfca14434bc76b9e8979373673e5e: Status 404 returned error can't find the container with id bb6fe64ae2a5bfa113a0cdb0b1c062a0f09bfca14434bc76b9e8979373673e5e
Jan 21 16:05:55 crc kubenswrapper[4760]: I0121 16:05:55.234201 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-j76bd"]
Jan 21 16:05:55 crc kubenswrapper[4760]: I0121 16:05:55.239866 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-tp55g"]
Jan 21 16:05:55 crc kubenswrapper[4760]: I0121 16:05:55.491180 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-65fzw"]
Jan 21 16:05:55 crc kubenswrapper[4760]: I0121 16:05:55.503462 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-pgvwf"]
Jan 21 16:05:55 crc kubenswrapper[4760]: I0121 16:05:55.513031 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-f68f58447-zsv2n"]
Jan 21 16:05:55 crc kubenswrapper[4760]: I0121 16:05:55.522655 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 16:05:56 crc kubenswrapper[4760]: I0121 16:05:56.427216 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-4mwgm"]
Jan 21 16:05:56 crc kubenswrapper[4760]: I0121 16:05:56.428057 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jxk64" event={"ID":"c3a06513-66a9-4f6f-b419-d6e6c6427547","Type":"ContainerStarted","Data":"91ae9aa025a96304b441062f61c991081e3d9849cdedbc921b2b4f5ea8668d8b"}
Jan 21 16:05:56 crc kubenswrapper[4760]: I0121 16:05:56.428093 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9d85d47c-47vt9" event={"ID":"f51922da-9bc6-45ad-91ec-9ebabdf2abff","Type":"ContainerStarted","Data":"340f43b565514298df116cb56e79d89aa15b4f1ab84740b3439e442248391c6b"}
Jan 21 16:05:56 crc kubenswrapper[4760]: I0121 16:05:56.428107 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-tp55g" event={"ID":"753473df-c019-484a-95d5-01f46173e10a","Type":"ContainerStarted","Data":"70907ce332e1a9f9fa5d75d7c40d92f0205ed257fe3dfaea02a1a05b5443bda4"}
Jan 21 16:05:56 crc kubenswrapper[4760]: I0121 16:05:56.428126 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-65fzw" event={"ID":"820ab298-8a58-4ac5-b7d2-ff030c6d2aff","Type":"ContainerStarted","Data":"32c3ace8959ac2cefe8d6242f2e1a8ea1eca8c9122b7a433793bb83fc6d70f8f"}
Jan 21 16:05:56 crc kubenswrapper[4760]: I0121 16:05:56.428138 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-j76bd" event={"ID":"3bf0e00e-fc38-45a9-8615-dd5398ed1209","Type":"ContainerStarted","Data":"af5ddb7d0cdc80be37d99d40d9448dcfd4fc35785a84df5df1d392f0ec375992"}
Jan 21 16:05:56 crc kubenswrapper[4760]: I0121 16:05:56.428150 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f68f58447-zsv2n" event={"ID":"45befe43-dd76-4ea4-a09e-93342d93d9fc","Type":"ContainerStarted","Data":"4098327d7964885934fe26fdb3d844bea3c5bdd35f319a050aa0039e2e42a6c4"}
Jan 21 16:05:56 crc kubenswrapper[4760]: I0121 16:05:56.428162 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f6cd74fcc-kvhmv" event={"ID":"154b8943-7072-4dbe-89b0-492e321973f1","Type":"ContainerStarted","Data":"bb6fe64ae2a5bfa113a0cdb0b1c062a0f09bfca14434bc76b9e8979373673e5e"}
Jan 21 16:05:56 crc kubenswrapper[4760]: I0121 16:05:56.524349 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6f6cd74fcc-kvhmv"]
Jan 21 16:05:56 crc kubenswrapper[4760]: I0121 16:05:56.545245 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-56745f5bbf-pgk75"]
Jan 21 16:05:56 crc kubenswrapper[4760]: E0121 16:05:56.545669 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d4e60fd-bb4c-4460-87db-729dac85afbc" containerName="mariadb-account-create-update"
Jan 21 16:05:56 crc kubenswrapper[4760]: I0121 16:05:56.545698 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d4e60fd-bb4c-4460-87db-729dac85afbc" containerName="mariadb-account-create-update"
Jan 21 16:05:56 crc kubenswrapper[4760]: I0121 16:05:56.545908 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d4e60fd-bb4c-4460-87db-729dac85afbc" containerName="mariadb-account-create-update"
Jan 21 16:05:56 crc kubenswrapper[4760]: I0121 16:05:56.546894 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-56745f5bbf-pgk75"
Jan 21 16:05:56 crc kubenswrapper[4760]: I0121 16:05:56.572802 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 16:05:56 crc kubenswrapper[4760]: I0121 16:05:56.612441 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-56745f5bbf-pgk75"]
Jan 21 16:05:56 crc kubenswrapper[4760]: I0121 16:05:56.640680 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj9lx\" (UniqueName: \"kubernetes.io/projected/ef989152-b3b7-4ea7-be1b-25375dc04a66-kube-api-access-wj9lx\") pod \"horizon-56745f5bbf-pgk75\" (UID: \"ef989152-b3b7-4ea7-be1b-25375dc04a66\") " pod="openstack/horizon-56745f5bbf-pgk75"
Jan 21 16:05:56 crc kubenswrapper[4760]: I0121 16:05:56.640868 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ef989152-b3b7-4ea7-be1b-25375dc04a66-config-data\") pod \"horizon-56745f5bbf-pgk75\" (UID: \"ef989152-b3b7-4ea7-be1b-25375dc04a66\") " pod="openstack/horizon-56745f5bbf-pgk75"
Jan 21 16:05:56 crc kubenswrapper[4760]: I0121 16:05:56.640915 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef989152-b3b7-4ea7-be1b-25375dc04a66-logs\") pod \"horizon-56745f5bbf-pgk75\" (UID: \"ef989152-b3b7-4ea7-be1b-25375dc04a66\") " pod="openstack/horizon-56745f5bbf-pgk75"
Jan 21 16:05:56 crc kubenswrapper[4760]: I0121 16:05:56.640942 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ef989152-b3b7-4ea7-be1b-25375dc04a66-horizon-secret-key\") pod \"horizon-56745f5bbf-pgk75\" (UID: \"ef989152-b3b7-4ea7-be1b-25375dc04a66\") " pod="openstack/horizon-56745f5bbf-pgk75"
Jan 21 16:05:56 crc kubenswrapper[4760]: I0121 16:05:56.640985 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ef989152-b3b7-4ea7-be1b-25375dc04a66-scripts\") pod \"horizon-56745f5bbf-pgk75\" (UID: \"ef989152-b3b7-4ea7-be1b-25375dc04a66\") " pod="openstack/horizon-56745f5bbf-pgk75"
Jan 21 16:05:56 crc kubenswrapper[4760]: I0121 16:05:56.742977 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ef989152-b3b7-4ea7-be1b-25375dc04a66-config-data\") pod \"horizon-56745f5bbf-pgk75\" (UID: \"ef989152-b3b7-4ea7-be1b-25375dc04a66\") " pod="openstack/horizon-56745f5bbf-pgk75"
Jan 21 16:05:56 crc kubenswrapper[4760]: I0121 16:05:56.744103 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef989152-b3b7-4ea7-be1b-25375dc04a66-logs\") pod \"horizon-56745f5bbf-pgk75\" (UID: \"ef989152-b3b7-4ea7-be1b-25375dc04a66\") " pod="openstack/horizon-56745f5bbf-pgk75"
Jan 21 16:05:56 crc kubenswrapper[4760]: I0121 16:05:56.744144 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ef989152-b3b7-4ea7-be1b-25375dc04a66-horizon-secret-key\") pod \"horizon-56745f5bbf-pgk75\" (UID: \"ef989152-b3b7-4ea7-be1b-25375dc04a66\") " pod="openstack/horizon-56745f5bbf-pgk75"
Jan 21 16:05:56 crc kubenswrapper[4760]: I0121 16:05:56.744189 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ef989152-b3b7-4ea7-be1b-25375dc04a66-scripts\") pod \"horizon-56745f5bbf-pgk75\" (UID: \"ef989152-b3b7-4ea7-be1b-25375dc04a66\") " pod="openstack/horizon-56745f5bbf-pgk75"
Jan 21 16:05:56 crc kubenswrapper[4760]: I0121 16:05:56.744226 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wj9lx\" (UniqueName: \"kubernetes.io/projected/ef989152-b3b7-4ea7-be1b-25375dc04a66-kube-api-access-wj9lx\") pod \"horizon-56745f5bbf-pgk75\" (UID: \"ef989152-b3b7-4ea7-be1b-25375dc04a66\") " pod="openstack/horizon-56745f5bbf-pgk75"
Jan 21 16:05:56 crc kubenswrapper[4760]: I0121 16:05:56.745521 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ef989152-b3b7-4ea7-be1b-25375dc04a66-scripts\") pod \"horizon-56745f5bbf-pgk75\" (UID: \"ef989152-b3b7-4ea7-be1b-25375dc04a66\") " pod="openstack/horizon-56745f5bbf-pgk75"
Jan 21 16:05:56 crc kubenswrapper[4760]: I0121 16:05:56.750872 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef989152-b3b7-4ea7-be1b-25375dc04a66-logs\") pod \"horizon-56745f5bbf-pgk75\" (UID: \"ef989152-b3b7-4ea7-be1b-25375dc04a66\") " pod="openstack/horizon-56745f5bbf-pgk75"
Jan 21 16:05:56 crc kubenswrapper[4760]: I0121 16:05:56.752907 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ef989152-b3b7-4ea7-be1b-25375dc04a66-config-data\") pod \"horizon-56745f5bbf-pgk75\" (UID: \"ef989152-b3b7-4ea7-be1b-25375dc04a66\") " pod="openstack/horizon-56745f5bbf-pgk75"
Jan 21 16:05:56 crc kubenswrapper[4760]: I0121 16:05:56.753311 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ef989152-b3b7-4ea7-be1b-25375dc04a66-horizon-secret-key\") pod \"horizon-56745f5bbf-pgk75\" (UID: \"ef989152-b3b7-4ea7-be1b-25375dc04a66\") " pod="openstack/horizon-56745f5bbf-pgk75"
Jan 21 16:05:56 crc kubenswrapper[4760]: I0121 16:05:56.762969 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wj9lx\" (UniqueName: \"kubernetes.io/projected/ef989152-b3b7-4ea7-be1b-25375dc04a66-kube-api-access-wj9lx\") pod \"horizon-56745f5bbf-pgk75\" (UID: \"ef989152-b3b7-4ea7-be1b-25375dc04a66\") " pod="openstack/horizon-56745f5bbf-pgk75"
Jan 21 16:05:56 crc kubenswrapper[4760]: E0121 16:05:56.773380 4760 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf51922da_9bc6_45ad_91ec_9ebabdf2abff.slice/crio-d7d5c3fe1360ae7995944485151df44bcecc0329aba948241876058485ca8be8.scope\": RecentStats: unable to find data in memory cache]"
Jan 21 16:05:56 crc kubenswrapper[4760]: I0121 16:05:56.893838 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-56745f5bbf-pgk75"
Jan 21 16:05:57 crc kubenswrapper[4760]: I0121 16:05:57.369514 4760 generic.go:334] "Generic (PLEG): container finished" podID="f51922da-9bc6-45ad-91ec-9ebabdf2abff" containerID="d7d5c3fe1360ae7995944485151df44bcecc0329aba948241876058485ca8be8" exitCode=0
Jan 21 16:05:57 crc kubenswrapper[4760]: I0121 16:05:57.369879 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9d85d47c-47vt9" event={"ID":"f51922da-9bc6-45ad-91ec-9ebabdf2abff","Type":"ContainerDied","Data":"d7d5c3fe1360ae7995944485151df44bcecc0329aba948241876058485ca8be8"}
Jan 21 16:05:57 crc kubenswrapper[4760]: I0121 16:05:57.375713 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-pgvwf" event={"ID":"e272905b-28ec-4f49-8c51-f5c5d97c4a9d","Type":"ContainerStarted","Data":"e122d7d9f206fc81eb1c1fe0d60d94f65902f9b343f8a1406a38254a99f711ae"}
Jan 21 16:05:57 crc kubenswrapper[4760]: I0121 16:05:57.398854 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jxk64" event={"ID":"c3a06513-66a9-4f6f-b419-d6e6c6427547","Type":"ContainerStarted","Data":"7eb018cf6bcb54d596b8acbb0255bd775c4f2cb81165dc1d895a7dd61789b94b"}
Jan 21 16:05:57 crc kubenswrapper[4760]: I0121 16:05:57.404924 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8b68aa1-7489-4689-ad6b-8aa7149b9a67","Type":"ContainerStarted","Data":"67106a7322a6efcc713f80439a85e4ab5666dd6671b1f1903ab8d4cfe53081b5"}
Jan 21 16:05:57 crc kubenswrapper[4760]: I0121 16:05:57.412493 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-tp55g" event={"ID":"753473df-c019-484a-95d5-01f46173e10a","Type":"ContainerStarted","Data":"a1d8ef9f5a82dd4c8078328950cb300d1c89fe54dff0b7699d3d291f3d477977"}
Jan 21 16:05:57 crc kubenswrapper[4760]: I0121 16:05:57.425532 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-jxk64" podStartSLOduration=5.42550296 podStartE2EDuration="5.42550296s" podCreationTimestamp="2026-01-21 16:05:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:05:57.420667932 +0000 UTC m=+1128.088437530" watchObservedRunningTime="2026-01-21 16:05:57.42550296 +0000 UTC m=+1128.093272548"
Jan 21 16:05:57 crc kubenswrapper[4760]: I0121 16:05:57.450951 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-tp55g" podStartSLOduration=5.450927889 podStartE2EDuration="5.450927889s" podCreationTimestamp="2026-01-21 16:05:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:05:57.449619878 +0000 UTC m=+1128.117389456" watchObservedRunningTime="2026-01-21 16:05:57.450927889 +0000 UTC m=+1128.118697467"
Jan 21 16:05:58 crc kubenswrapper[4760]: I0121 16:05:57.688362 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9d85d47c-47vt9"
Jan 21 16:05:58 crc kubenswrapper[4760]: I0121 16:05:57.769283 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f51922da-9bc6-45ad-91ec-9ebabdf2abff-ovsdbserver-nb\") pod \"f51922da-9bc6-45ad-91ec-9ebabdf2abff\" (UID: \"f51922da-9bc6-45ad-91ec-9ebabdf2abff\") "
Jan 21 16:05:58 crc kubenswrapper[4760]: I0121 16:05:57.769364 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f51922da-9bc6-45ad-91ec-9ebabdf2abff-config\") pod \"f51922da-9bc6-45ad-91ec-9ebabdf2abff\" (UID: \"f51922da-9bc6-45ad-91ec-9ebabdf2abff\") "
Jan 21 16:05:58 crc kubenswrapper[4760]: I0121 16:05:57.769408 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f51922da-9bc6-45ad-91ec-9ebabdf2abff-ovsdbserver-sb\") pod \"f51922da-9bc6-45ad-91ec-9ebabdf2abff\" (UID: \"f51922da-9bc6-45ad-91ec-9ebabdf2abff\") "
Jan 21 16:05:58 crc kubenswrapper[4760]: I0121 16:05:57.769605 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f51922da-9bc6-45ad-91ec-9ebabdf2abff-dns-svc\") pod \"f51922da-9bc6-45ad-91ec-9ebabdf2abff\" (UID: \"f51922da-9bc6-45ad-91ec-9ebabdf2abff\") "
Jan 21 16:05:58 crc kubenswrapper[4760]: I0121 16:05:57.769634 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpv8b\" (UniqueName: \"kubernetes.io/projected/f51922da-9bc6-45ad-91ec-9ebabdf2abff-kube-api-access-fpv8b\") pod \"f51922da-9bc6-45ad-91ec-9ebabdf2abff\" (UID: \"f51922da-9bc6-45ad-91ec-9ebabdf2abff\") "
Jan 21 16:05:58 crc kubenswrapper[4760]: I0121 16:05:57.774269 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f51922da-9bc6-45ad-91ec-9ebabdf2abff-kube-api-access-fpv8b" (OuterVolumeSpecName: "kube-api-access-fpv8b") pod "f51922da-9bc6-45ad-91ec-9ebabdf2abff" (UID: "f51922da-9bc6-45ad-91ec-9ebabdf2abff"). InnerVolumeSpecName "kube-api-access-fpv8b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:05:58 crc kubenswrapper[4760]: I0121 16:05:57.793508 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f51922da-9bc6-45ad-91ec-9ebabdf2abff-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f51922da-9bc6-45ad-91ec-9ebabdf2abff" (UID: "f51922da-9bc6-45ad-91ec-9ebabdf2abff"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 16:05:58 crc kubenswrapper[4760]: I0121 16:05:57.795628 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f51922da-9bc6-45ad-91ec-9ebabdf2abff-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f51922da-9bc6-45ad-91ec-9ebabdf2abff" (UID: "f51922da-9bc6-45ad-91ec-9ebabdf2abff"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 16:05:58 crc kubenswrapper[4760]: I0121 16:05:57.798006 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f51922da-9bc6-45ad-91ec-9ebabdf2abff-config" (OuterVolumeSpecName: "config") pod "f51922da-9bc6-45ad-91ec-9ebabdf2abff" (UID: "f51922da-9bc6-45ad-91ec-9ebabdf2abff"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 16:05:58 crc kubenswrapper[4760]: I0121 16:05:57.798449 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f51922da-9bc6-45ad-91ec-9ebabdf2abff-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f51922da-9bc6-45ad-91ec-9ebabdf2abff" (UID: "f51922da-9bc6-45ad-91ec-9ebabdf2abff"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 16:05:58 crc kubenswrapper[4760]: I0121 16:05:57.872129 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f51922da-9bc6-45ad-91ec-9ebabdf2abff-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 21 16:05:58 crc kubenswrapper[4760]: I0121 16:05:57.872447 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f51922da-9bc6-45ad-91ec-9ebabdf2abff-config\") on node \"crc\" DevicePath \"\""
Jan 21 16:05:58 crc kubenswrapper[4760]: I0121 16:05:57.872459 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f51922da-9bc6-45ad-91ec-9ebabdf2abff-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 21 16:05:58 crc kubenswrapper[4760]: I0121 16:05:57.872470 4760 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f51922da-9bc6-45ad-91ec-9ebabdf2abff-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 21 16:05:58 crc kubenswrapper[4760]: I0121 16:05:57.872482 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpv8b\" (UniqueName: \"kubernetes.io/projected/f51922da-9bc6-45ad-91ec-9ebabdf2abff-kube-api-access-fpv8b\") on node \"crc\" DevicePath \"\""
Jan 21 16:05:58 crc kubenswrapper[4760]: I0121 16:05:58.531142 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9d85d47c-47vt9" event={"ID":"f51922da-9bc6-45ad-91ec-9ebabdf2abff","Type":"ContainerDied","Data":"340f43b565514298df116cb56e79d89aa15b4f1ab84740b3439e442248391c6b"}
Jan 21 16:05:58 crc kubenswrapper[4760]: I0121 16:05:58.531193 4760 scope.go:117] "RemoveContainer" containerID="d7d5c3fe1360ae7995944485151df44bcecc0329aba948241876058485ca8be8"
Jan 21 16:05:58 crc kubenswrapper[4760]: I0121 16:05:58.531309 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9d85d47c-47vt9"
Jan 21 16:05:58 crc kubenswrapper[4760]: I0121 16:05:58.543092 4760 generic.go:334] "Generic (PLEG): container finished" podID="24140731-e427-429e-a6cc-ad33f28eadb3" containerID="bba3d3f5c39e63bbea59396fd0379c03d80941c17b1ee5ae5aa8abc9754a2304" exitCode=0
Jan 21 16:05:58 crc kubenswrapper[4760]: I0121 16:05:58.543224 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffb94d8ff-4mwgm" event={"ID":"24140731-e427-429e-a6cc-ad33f28eadb3","Type":"ContainerDied","Data":"bba3d3f5c39e63bbea59396fd0379c03d80941c17b1ee5ae5aa8abc9754a2304"}
Jan 21 16:05:58 crc kubenswrapper[4760]: I0121 16:05:58.543261 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffb94d8ff-4mwgm" event={"ID":"24140731-e427-429e-a6cc-ad33f28eadb3","Type":"ContainerStarted","Data":"6a1d3db7a9078b67e10847c534f3aeb922e669c4fb474cc218aaf633d69560d0"}
Jan 21 16:05:58 crc kubenswrapper[4760]: I0121 16:05:58.571227 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d1ccc2ed-d1e8-4b84-807d-55d70e8def12","Type":"ContainerStarted","Data":"c198ca0f893d7e816ded081444b093dd445250dfc20fe6354495e0eb82b2f10a"}
Jan 21 16:05:58 crc kubenswrapper[4760]: I0121 16:05:58.688277 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-47vt9"]
Jan 21 16:05:58 crc kubenswrapper[4760]: I0121 16:05:58.701754 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-47vt9"]
Jan 21 16:05:59 crc kubenswrapper[4760]: I0121 16:05:59.020914 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-56745f5bbf-pgk75"]
Jan 21 16:05:59 crc kubenswrapper[4760]: W0121 16:05:59.041442 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef989152_b3b7_4ea7_be1b_25375dc04a66.slice/crio-a32d15c9b6c67417ebe4808bf8b666b59029f0e4babb24e4903d1d4bbffdffd0 WatchSource:0}: Error finding container a32d15c9b6c67417ebe4808bf8b666b59029f0e4babb24e4903d1d4bbffdffd0: Status 404 returned error can't find the container with id a32d15c9b6c67417ebe4808bf8b666b59029f0e4babb24e4903d1d4bbffdffd0
Jan 21 16:06:00 crc kubenswrapper[4760]: I0121 16:06:00.237470 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f51922da-9bc6-45ad-91ec-9ebabdf2abff" path="/var/lib/kubelet/pods/f51922da-9bc6-45ad-91ec-9ebabdf2abff/volumes"
Jan 21 16:06:00 crc kubenswrapper[4760]: I0121 16:06:00.586049 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d1ccc2ed-d1e8-4b84-807d-55d70e8def12","Type":"ContainerStarted","Data":"7639c826114d97c55c5361b21a293ddec9e88da42b6941e861a184c56c01cbeb"}
Jan 21 16:06:00 crc kubenswrapper[4760]: I0121 16:06:00.586545 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d1ccc2ed-d1e8-4b84-807d-55d70e8def12","Type":"ContainerStarted","Data":"9f1f8bf21fe7db516b5c897f8729acf4f669de40c55d404d8397410e26c1efa4"}
Jan 21 16:06:00 crc kubenswrapper[4760]: I0121 16:06:00.648104 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56745f5bbf-pgk75" event={"ID":"ef989152-b3b7-4ea7-be1b-25375dc04a66","Type":"ContainerStarted","Data":"a32d15c9b6c67417ebe4808bf8b666b59029f0e4babb24e4903d1d4bbffdffd0"}
Jan 21 16:06:00 crc kubenswrapper[4760]: I0121 16:06:00.693837 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffb94d8ff-4mwgm" event={"ID":"24140731-e427-429e-a6cc-ad33f28eadb3","Type":"ContainerStarted","Data":"f6ebe2583edd80f4105775726ca9ea906b253fc8f8788aa31635ce1de7544730"}
Jan 21 16:06:00 crc kubenswrapper[4760]: I0121 16:06:00.694863 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6ffb94d8ff-4mwgm"
Jan 21 16:06:00 crc kubenswrapper[4760]: I0121 16:06:00.845845 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6ffb94d8ff-4mwgm" podStartSLOduration=7.845813962 podStartE2EDuration="7.845813962s" podCreationTimestamp="2026-01-21 16:05:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:06:00.832009606 +0000 UTC m=+1131.499779204" watchObservedRunningTime="2026-01-21 16:06:00.845813962 +0000 UTC m=+1131.513583530"
Jan 21 16:06:01 crc kubenswrapper[4760]: I0121 16:06:01.724096 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d1ccc2ed-d1e8-4b84-807d-55d70e8def12","Type":"ContainerStarted","Data":"dabda309686711f65fa09274526f555a8d5c1646fa15433a5f1b9445f71d9ab7"}
Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.568537 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-f68f58447-zsv2n"]
Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.602123 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-789c75ff48-s7f9p"]
Jan 21 16:06:02 crc kubenswrapper[4760]: E0121 16:06:02.602703 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f51922da-9bc6-45ad-91ec-9ebabdf2abff" containerName="init"
Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.602731 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f51922da-9bc6-45ad-91ec-9ebabdf2abff" containerName="init"
Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.602913 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="f51922da-9bc6-45ad-91ec-9ebabdf2abff" containerName="init"
Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.604159 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-789c75ff48-s7f9p"
Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.617813 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc"
Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.622881 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-789c75ff48-s7f9p"]
Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.668283 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9ce8d17c-d046-45b5-9136-6faca838de63-horizon-secret-key\") pod \"horizon-789c75ff48-s7f9p\" (UID: \"9ce8d17c-d046-45b5-9136-6faca838de63\") " pod="openstack/horizon-789c75ff48-s7f9p"
Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.668446 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9ce8d17c-d046-45b5-9136-6faca838de63-config-data\") pod \"horizon-789c75ff48-s7f9p\" (UID: \"9ce8d17c-d046-45b5-9136-6faca838de63\") " pod="openstack/horizon-789c75ff48-s7f9p"
Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.668599 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ce8d17c-d046-45b5-9136-6faca838de63-logs\") pod \"horizon-789c75ff48-s7f9p\" (UID: \"9ce8d17c-d046-45b5-9136-6faca838de63\") " pod="openstack/horizon-789c75ff48-s7f9p"
Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.668660 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ce8d17c-d046-45b5-9136-6faca838de63-combined-ca-bundle\") pod \"horizon-789c75ff48-s7f9p\" (UID: \"9ce8d17c-d046-45b5-9136-6faca838de63\") " pod="openstack/horizon-789c75ff48-s7f9p"
Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.668735 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ce8d17c-d046-45b5-9136-6faca838de63-scripts\") pod \"horizon-789c75ff48-s7f9p\" (UID: \"9ce8d17c-d046-45b5-9136-6faca838de63\") " pod="openstack/horizon-789c75ff48-s7f9p"
Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.668839 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ce8d17c-d046-45b5-9136-6faca838de63-horizon-tls-certs\") pod \"horizon-789c75ff48-s7f9p\" (UID: \"9ce8d17c-d046-45b5-9136-6faca838de63\") " pod="openstack/horizon-789c75ff48-s7f9p"
Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.668910 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2qdt\" (UniqueName: \"kubernetes.io/projected/9ce8d17c-d046-45b5-9136-6faca838de63-kube-api-access-k2qdt\") pod \"horizon-789c75ff48-s7f9p\" (UID: \"9ce8d17c-d046-45b5-9136-6faca838de63\") " pod="openstack/horizon-789c75ff48-s7f9p"
Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.677569 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-56745f5bbf-pgk75"]
Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.696157 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5c9896dc76-gwrzv"]
Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.698160 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5c9896dc76-gwrzv"
Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.709730 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5c9896dc76-gwrzv"]
Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.770513 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e7e96ce-a64f-4a21-97e1-b2ebabc7e236-horizon-tls-certs\") pod \"horizon-5c9896dc76-gwrzv\" (UID: \"0e7e96ce-a64f-4a21-97e1-b2ebabc7e236\") " pod="openstack/horizon-5c9896dc76-gwrzv"
Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.770565 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ce8d17c-d046-45b5-9136-6faca838de63-logs\") pod \"horizon-789c75ff48-s7f9p\" (UID: \"9ce8d17c-d046-45b5-9136-6faca838de63\") " pod="openstack/horizon-789c75ff48-s7f9p"
Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.770591 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ce8d17c-d046-45b5-9136-6faca838de63-combined-ca-bundle\") pod \"horizon-789c75ff48-s7f9p\" (UID: \"9ce8d17c-d046-45b5-9136-6faca838de63\") " pod="openstack/horizon-789c75ff48-s7f9p"
Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.770636 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ce8d17c-d046-45b5-9136-6faca838de63-scripts\") pod \"horizon-789c75ff48-s7f9p\" (UID: \"9ce8d17c-d046-45b5-9136-6faca838de63\") " pod="openstack/horizon-789c75ff48-s7f9p"
Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.770659 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0e7e96ce-a64f-4a21-97e1-b2ebabc7e236-horizon-secret-key\") pod \"horizon-5c9896dc76-gwrzv\" (UID: \"0e7e96ce-a64f-4a21-97e1-b2ebabc7e236\") " pod="openstack/horizon-5c9896dc76-gwrzv"
Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.770697 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0e7e96ce-a64f-4a21-97e1-b2ebabc7e236-config-data\") pod \"horizon-5c9896dc76-gwrzv\" (UID: \"0e7e96ce-a64f-4a21-97e1-b2ebabc7e236\") " pod="openstack/horizon-5c9896dc76-gwrzv"
Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.770725 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ce8d17c-d046-45b5-9136-6faca838de63-horizon-tls-certs\") pod \"horizon-789c75ff48-s7f9p\" (UID: \"9ce8d17c-d046-45b5-9136-6faca838de63\") " pod="openstack/horizon-789c75ff48-s7f9p"
Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.770753 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0e7e96ce-a64f-4a21-97e1-b2ebabc7e236-scripts\") pod \"horizon-5c9896dc76-gwrzv\" (UID: \"0e7e96ce-a64f-4a21-97e1-b2ebabc7e236\") " pod="openstack/horizon-5c9896dc76-gwrzv"
Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.770772 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2qdt\" (UniqueName: \"kubernetes.io/projected/9ce8d17c-d046-45b5-9136-6faca838de63-kube-api-access-k2qdt\") pod \"horizon-789c75ff48-s7f9p\" (UID: \"9ce8d17c-d046-45b5-9136-6faca838de63\") " pod="openstack/horizon-789c75ff48-s7f9p"
Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.770804 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e7e96ce-a64f-4a21-97e1-b2ebabc7e236-combined-ca-bundle\") pod \"horizon-5c9896dc76-gwrzv\" (UID: \"0e7e96ce-a64f-4a21-97e1-b2ebabc7e236\") " pod="openstack/horizon-5c9896dc76-gwrzv"
Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.770824 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e7e96ce-a64f-4a21-97e1-b2ebabc7e236-logs\") pod \"horizon-5c9896dc76-gwrzv\" (UID: \"0e7e96ce-a64f-4a21-97e1-b2ebabc7e236\") " pod="openstack/horizon-5c9896dc76-gwrzv"
Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.771764 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ce8d17c-d046-45b5-9136-6faca838de63-logs\") pod \"horizon-789c75ff48-s7f9p\" (UID: \"9ce8d17c-d046-45b5-9136-6faca838de63\") " pod="openstack/horizon-789c75ff48-s7f9p"
Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.771619 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ce8d17c-d046-45b5-9136-6faca838de63-scripts\") pod \"horizon-789c75ff48-s7f9p\" (UID: \"9ce8d17c-d046-45b5-9136-6faca838de63\") " pod="openstack/horizon-789c75ff48-s7f9p"
Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.773734 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9ce8d17c-d046-45b5-9136-6faca838de63-horizon-secret-key\") pod \"horizon-789c75ff48-s7f9p\" (UID: \"9ce8d17c-d046-45b5-9136-6faca838de63\") " pod="openstack/horizon-789c75ff48-s7f9p"
Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.773773 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfvgg\" (UniqueName: \"kubernetes.io/projected/0e7e96ce-a64f-4a21-97e1-b2ebabc7e236-kube-api-access-cfvgg\") pod \"horizon-5c9896dc76-gwrzv\" (UID: \"0e7e96ce-a64f-4a21-97e1-b2ebabc7e236\") " pod="openstack/horizon-5c9896dc76-gwrzv"
Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.773840 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9ce8d17c-d046-45b5-9136-6faca838de63-config-data\") pod \"horizon-789c75ff48-s7f9p\" (UID: \"9ce8d17c-d046-45b5-9136-6faca838de63\") " pod="openstack/horizon-789c75ff48-s7f9p"
Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.775112 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9ce8d17c-d046-45b5-9136-6faca838de63-config-data\") pod \"horizon-789c75ff48-s7f9p\" (UID: \"9ce8d17c-d046-45b5-9136-6faca838de63\") " pod="openstack/horizon-789c75ff48-s7f9p"
Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.782354 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ce8d17c-d046-45b5-9136-6faca838de63-combined-ca-bundle\") pod \"horizon-789c75ff48-s7f9p\" (UID: \"9ce8d17c-d046-45b5-9136-6faca838de63\") " pod="openstack/horizon-789c75ff48-s7f9p"
Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.791870 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ce8d17c-d046-45b5-9136-6faca838de63-horizon-tls-certs\") pod \"horizon-789c75ff48-s7f9p\" (UID: \"9ce8d17c-d046-45b5-9136-6faca838de63\") " pod="openstack/horizon-789c75ff48-s7f9p"
Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.799525 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2qdt\" (UniqueName: \"kubernetes.io/projected/9ce8d17c-d046-45b5-9136-6faca838de63-kube-api-access-k2qdt\") pod \"horizon-789c75ff48-s7f9p\" (UID: \"9ce8d17c-d046-45b5-9136-6faca838de63\") "
pod="openstack/horizon-789c75ff48-s7f9p" Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.799758 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9ce8d17c-d046-45b5-9136-6faca838de63-horizon-secret-key\") pod \"horizon-789c75ff48-s7f9p\" (UID: \"9ce8d17c-d046-45b5-9136-6faca838de63\") " pod="openstack/horizon-789c75ff48-s7f9p" Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.880780 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0e7e96ce-a64f-4a21-97e1-b2ebabc7e236-config-data\") pod \"horizon-5c9896dc76-gwrzv\" (UID: \"0e7e96ce-a64f-4a21-97e1-b2ebabc7e236\") " pod="openstack/horizon-5c9896dc76-gwrzv" Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.880876 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0e7e96ce-a64f-4a21-97e1-b2ebabc7e236-scripts\") pod \"horizon-5c9896dc76-gwrzv\" (UID: \"0e7e96ce-a64f-4a21-97e1-b2ebabc7e236\") " pod="openstack/horizon-5c9896dc76-gwrzv" Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.880920 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e7e96ce-a64f-4a21-97e1-b2ebabc7e236-combined-ca-bundle\") pod \"horizon-5c9896dc76-gwrzv\" (UID: \"0e7e96ce-a64f-4a21-97e1-b2ebabc7e236\") " pod="openstack/horizon-5c9896dc76-gwrzv" Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.880939 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e7e96ce-a64f-4a21-97e1-b2ebabc7e236-logs\") pod \"horizon-5c9896dc76-gwrzv\" (UID: \"0e7e96ce-a64f-4a21-97e1-b2ebabc7e236\") " pod="openstack/horizon-5c9896dc76-gwrzv" Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.880997 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfvgg\" (UniqueName: \"kubernetes.io/projected/0e7e96ce-a64f-4a21-97e1-b2ebabc7e236-kube-api-access-cfvgg\") pod \"horizon-5c9896dc76-gwrzv\" (UID: \"0e7e96ce-a64f-4a21-97e1-b2ebabc7e236\") " pod="openstack/horizon-5c9896dc76-gwrzv" Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.881062 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e7e96ce-a64f-4a21-97e1-b2ebabc7e236-horizon-tls-certs\") pod \"horizon-5c9896dc76-gwrzv\" (UID: \"0e7e96ce-a64f-4a21-97e1-b2ebabc7e236\") " pod="openstack/horizon-5c9896dc76-gwrzv" Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.881111 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0e7e96ce-a64f-4a21-97e1-b2ebabc7e236-horizon-secret-key\") pod \"horizon-5c9896dc76-gwrzv\" (UID: \"0e7e96ce-a64f-4a21-97e1-b2ebabc7e236\") " pod="openstack/horizon-5c9896dc76-gwrzv" Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.881810 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0e7e96ce-a64f-4a21-97e1-b2ebabc7e236-scripts\") pod \"horizon-5c9896dc76-gwrzv\" (UID: \"0e7e96ce-a64f-4a21-97e1-b2ebabc7e236\") " pod="openstack/horizon-5c9896dc76-gwrzv" Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.882135 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e7e96ce-a64f-4a21-97e1-b2ebabc7e236-logs\") pod \"horizon-5c9896dc76-gwrzv\" (UID: \"0e7e96ce-a64f-4a21-97e1-b2ebabc7e236\") " pod="openstack/horizon-5c9896dc76-gwrzv" Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.884856 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/0e7e96ce-a64f-4a21-97e1-b2ebabc7e236-config-data\") pod \"horizon-5c9896dc76-gwrzv\" (UID: \"0e7e96ce-a64f-4a21-97e1-b2ebabc7e236\") " pod="openstack/horizon-5c9896dc76-gwrzv" Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.886911 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e7e96ce-a64f-4a21-97e1-b2ebabc7e236-horizon-tls-certs\") pod \"horizon-5c9896dc76-gwrzv\" (UID: \"0e7e96ce-a64f-4a21-97e1-b2ebabc7e236\") " pod="openstack/horizon-5c9896dc76-gwrzv" Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.888487 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0e7e96ce-a64f-4a21-97e1-b2ebabc7e236-horizon-secret-key\") pod \"horizon-5c9896dc76-gwrzv\" (UID: \"0e7e96ce-a64f-4a21-97e1-b2ebabc7e236\") " pod="openstack/horizon-5c9896dc76-gwrzv" Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.896651 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e7e96ce-a64f-4a21-97e1-b2ebabc7e236-combined-ca-bundle\") pod \"horizon-5c9896dc76-gwrzv\" (UID: \"0e7e96ce-a64f-4a21-97e1-b2ebabc7e236\") " pod="openstack/horizon-5c9896dc76-gwrzv" Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.901312 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfvgg\" (UniqueName: \"kubernetes.io/projected/0e7e96ce-a64f-4a21-97e1-b2ebabc7e236-kube-api-access-cfvgg\") pod \"horizon-5c9896dc76-gwrzv\" (UID: \"0e7e96ce-a64f-4a21-97e1-b2ebabc7e236\") " pod="openstack/horizon-5c9896dc76-gwrzv" Jan 21 16:06:02 crc kubenswrapper[4760]: I0121 16:06:02.950693 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-789c75ff48-s7f9p" Jan 21 16:06:03 crc kubenswrapper[4760]: I0121 16:06:03.092811 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5c9896dc76-gwrzv" Jan 21 16:06:03 crc kubenswrapper[4760]: I0121 16:06:03.763170 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-789c75ff48-s7f9p"] Jan 21 16:06:03 crc kubenswrapper[4760]: W0121 16:06:03.770676 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ce8d17c_d046_45b5_9136_6faca838de63.slice/crio-99845b706fb042dde05e1b648b3f5ce119d2a8e33b829322f4ebb2d5ed2d32b2 WatchSource:0}: Error finding container 99845b706fb042dde05e1b648b3f5ce119d2a8e33b829322f4ebb2d5ed2d32b2: Status 404 returned error can't find the container with id 99845b706fb042dde05e1b648b3f5ce119d2a8e33b829322f4ebb2d5ed2d32b2 Jan 21 16:06:03 crc kubenswrapper[4760]: I0121 16:06:03.904148 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5c9896dc76-gwrzv"] Jan 21 16:06:03 crc kubenswrapper[4760]: W0121 16:06:03.919199 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0e7e96ce_a64f_4a21_97e1_b2ebabc7e236.slice/crio-7f009908223fc26d83049a15597c51c8d8f54df769125f235e6d9cceb0e9a0d1 WatchSource:0}: Error finding container 7f009908223fc26d83049a15597c51c8d8f54df769125f235e6d9cceb0e9a0d1: Status 404 returned error can't find the container with id 7f009908223fc26d83049a15597c51c8d8f54df769125f235e6d9cceb0e9a0d1 Jan 21 16:06:04 crc kubenswrapper[4760]: I0121 16:06:04.482315 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6ffb94d8ff-4mwgm" Jan 21 16:06:04 crc kubenswrapper[4760]: I0121 16:06:04.536958 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-gmwwz"] Jan 21 
16:06:04 crc kubenswrapper[4760]: I0121 16:06:04.537214 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-gmwwz" podUID="15134486-2d84-4c09-9a92-4df82dfcf01a" containerName="dnsmasq-dns" containerID="cri-o://2a32131b8449d956c7baf21e7e66e7102a270602317e96706634151f8c2da87a" gracePeriod=10 Jan 21 16:06:04 crc kubenswrapper[4760]: I0121 16:06:04.772271 4760 generic.go:334] "Generic (PLEG): container finished" podID="15134486-2d84-4c09-9a92-4df82dfcf01a" containerID="2a32131b8449d956c7baf21e7e66e7102a270602317e96706634151f8c2da87a" exitCode=0 Jan 21 16:06:04 crc kubenswrapper[4760]: I0121 16:06:04.772380 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-gmwwz" event={"ID":"15134486-2d84-4c09-9a92-4df82dfcf01a","Type":"ContainerDied","Data":"2a32131b8449d956c7baf21e7e66e7102a270602317e96706634151f8c2da87a"} Jan 21 16:06:04 crc kubenswrapper[4760]: I0121 16:06:04.778585 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c9896dc76-gwrzv" event={"ID":"0e7e96ce-a64f-4a21-97e1-b2ebabc7e236","Type":"ContainerStarted","Data":"7f009908223fc26d83049a15597c51c8d8f54df769125f235e6d9cceb0e9a0d1"} Jan 21 16:06:04 crc kubenswrapper[4760]: I0121 16:06:04.782209 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-789c75ff48-s7f9p" event={"ID":"9ce8d17c-d046-45b5-9136-6faca838de63","Type":"ContainerStarted","Data":"99845b706fb042dde05e1b648b3f5ce119d2a8e33b829322f4ebb2d5ed2d32b2"} Jan 21 16:06:12 crc kubenswrapper[4760]: E0121 16:06:12.788891 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Jan 21 16:06:12 crc kubenswrapper[4760]: E0121 16:06:12.791336 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8dt4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]Vol
umeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-65fzw_openstack(820ab298-8a58-4ac5-b7d2-ff030c6d2aff): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 16:06:12 crc kubenswrapper[4760]: E0121 16:06:12.793492 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-65fzw" podUID="820ab298-8a58-4ac5-b7d2-ff030c6d2aff" Jan 21 16:06:12 crc kubenswrapper[4760]: I0121 16:06:12.912114 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-gmwwz" Jan 21 16:06:12 crc kubenswrapper[4760]: I0121 16:06:12.958286 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-gmwwz" Jan 21 16:06:12 crc kubenswrapper[4760]: I0121 16:06:12.958419 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-gmwwz" event={"ID":"15134486-2d84-4c09-9a92-4df82dfcf01a","Type":"ContainerDied","Data":"20792d9d50cdd66cf195616cc85024c48427416914eaf3fb5d1c437e9ab718b7"} Jan 21 16:06:12 crc kubenswrapper[4760]: I0121 16:06:12.958495 4760 scope.go:117] "RemoveContainer" containerID="2a32131b8449d956c7baf21e7e66e7102a270602317e96706634151f8c2da87a" Jan 21 16:06:12 crc kubenswrapper[4760]: E0121 16:06:12.959921 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-65fzw" podUID="820ab298-8a58-4ac5-b7d2-ff030c6d2aff" Jan 21 16:06:13 crc kubenswrapper[4760]: I0121 16:06:13.037264 4760 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/15134486-2d84-4c09-9a92-4df82dfcf01a-ovsdbserver-sb\") pod \"15134486-2d84-4c09-9a92-4df82dfcf01a\" (UID: \"15134486-2d84-4c09-9a92-4df82dfcf01a\") " Jan 21 16:06:13 crc kubenswrapper[4760]: I0121 16:06:13.037678 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15134486-2d84-4c09-9a92-4df82dfcf01a-dns-svc\") pod \"15134486-2d84-4c09-9a92-4df82dfcf01a\" (UID: \"15134486-2d84-4c09-9a92-4df82dfcf01a\") " Jan 21 16:06:13 crc kubenswrapper[4760]: I0121 16:06:13.037708 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/15134486-2d84-4c09-9a92-4df82dfcf01a-ovsdbserver-nb\") pod \"15134486-2d84-4c09-9a92-4df82dfcf01a\" (UID: \"15134486-2d84-4c09-9a92-4df82dfcf01a\") " Jan 21 16:06:13 crc kubenswrapper[4760]: I0121 16:06:13.037738 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15134486-2d84-4c09-9a92-4df82dfcf01a-config\") pod \"15134486-2d84-4c09-9a92-4df82dfcf01a\" (UID: \"15134486-2d84-4c09-9a92-4df82dfcf01a\") " Jan 21 16:06:13 crc kubenswrapper[4760]: I0121 16:06:13.037770 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bn24x\" (UniqueName: \"kubernetes.io/projected/15134486-2d84-4c09-9a92-4df82dfcf01a-kube-api-access-bn24x\") pod \"15134486-2d84-4c09-9a92-4df82dfcf01a\" (UID: \"15134486-2d84-4c09-9a92-4df82dfcf01a\") " Jan 21 16:06:13 crc kubenswrapper[4760]: I0121 16:06:13.045626 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15134486-2d84-4c09-9a92-4df82dfcf01a-kube-api-access-bn24x" (OuterVolumeSpecName: "kube-api-access-bn24x") pod 
"15134486-2d84-4c09-9a92-4df82dfcf01a" (UID: "15134486-2d84-4c09-9a92-4df82dfcf01a"). InnerVolumeSpecName "kube-api-access-bn24x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:06:13 crc kubenswrapper[4760]: I0121 16:06:13.086464 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15134486-2d84-4c09-9a92-4df82dfcf01a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "15134486-2d84-4c09-9a92-4df82dfcf01a" (UID: "15134486-2d84-4c09-9a92-4df82dfcf01a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:06:13 crc kubenswrapper[4760]: I0121 16:06:13.089300 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15134486-2d84-4c09-9a92-4df82dfcf01a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "15134486-2d84-4c09-9a92-4df82dfcf01a" (UID: "15134486-2d84-4c09-9a92-4df82dfcf01a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:06:13 crc kubenswrapper[4760]: I0121 16:06:13.090998 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15134486-2d84-4c09-9a92-4df82dfcf01a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "15134486-2d84-4c09-9a92-4df82dfcf01a" (UID: "15134486-2d84-4c09-9a92-4df82dfcf01a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:06:13 crc kubenswrapper[4760]: I0121 16:06:13.093251 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15134486-2d84-4c09-9a92-4df82dfcf01a-config" (OuterVolumeSpecName: "config") pod "15134486-2d84-4c09-9a92-4df82dfcf01a" (UID: "15134486-2d84-4c09-9a92-4df82dfcf01a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:06:13 crc kubenswrapper[4760]: I0121 16:06:13.139829 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/15134486-2d84-4c09-9a92-4df82dfcf01a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:13 crc kubenswrapper[4760]: I0121 16:06:13.140061 4760 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15134486-2d84-4c09-9a92-4df82dfcf01a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:13 crc kubenswrapper[4760]: I0121 16:06:13.140118 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/15134486-2d84-4c09-9a92-4df82dfcf01a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:13 crc kubenswrapper[4760]: I0121 16:06:13.140253 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15134486-2d84-4c09-9a92-4df82dfcf01a-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:13 crc kubenswrapper[4760]: I0121 16:06:13.140378 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bn24x\" (UniqueName: \"kubernetes.io/projected/15134486-2d84-4c09-9a92-4df82dfcf01a-kube-api-access-bn24x\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:13 crc kubenswrapper[4760]: I0121 16:06:13.301909 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-gmwwz"] Jan 21 16:06:13 crc kubenswrapper[4760]: I0121 16:06:13.308462 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-gmwwz"] Jan 21 16:06:13 crc kubenswrapper[4760]: I0121 16:06:13.639477 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15134486-2d84-4c09-9a92-4df82dfcf01a" path="/var/lib/kubelet/pods/15134486-2d84-4c09-9a92-4df82dfcf01a/volumes" Jan 21 16:06:14 crc kubenswrapper[4760]: 
I0121 16:06:14.229766 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-gmwwz" podUID="15134486-2d84-4c09-9a92-4df82dfcf01a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.116:5353: i/o timeout" Jan 21 16:06:14 crc kubenswrapper[4760]: I0121 16:06:14.978203 4760 generic.go:334] "Generic (PLEG): container finished" podID="c3a06513-66a9-4f6f-b419-d6e6c6427547" containerID="7eb018cf6bcb54d596b8acbb0255bd775c4f2cb81165dc1d895a7dd61789b94b" exitCode=0 Jan 21 16:06:14 crc kubenswrapper[4760]: I0121 16:06:14.978285 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jxk64" event={"ID":"c3a06513-66a9-4f6f-b419-d6e6c6427547","Type":"ContainerDied","Data":"7eb018cf6bcb54d596b8acbb0255bd775c4f2cb81165dc1d895a7dd61789b94b"} Jan 21 16:06:17 crc kubenswrapper[4760]: E0121 16:06:17.952132 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 21 16:06:17 crc kubenswrapper[4760]: E0121 16:06:17.952915 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n649h5f6h75h98h76h5cfh54ch56fh9ch65bh59dh8fhc9h6dh68ch8fh58chd6h678h5d7hc9h4h5bch64ch6dh58bh56fh574h68ch66dh54bh94q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zxnm7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-6f6cd74fcc-kvhmv_openstack(154b8943-7072-4dbe-89b0-492e321973f1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 16:06:17 crc kubenswrapper[4760]: E0121 
16:06:17.956089 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-6f6cd74fcc-kvhmv" podUID="154b8943-7072-4dbe-89b0-492e321973f1" Jan 21 16:06:21 crc kubenswrapper[4760]: E0121 16:06:21.684287 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 21 16:06:21 crc kubenswrapper[4760]: E0121 16:06:21.685219 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 21 16:06:21 crc kubenswrapper[4760]: E0121 16:06:21.685288 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n56chdh545hfh5bch64h5fhchc7h658h5h9fhbbh544h5d5h9fh55h54h5d8h5fh64fh546h578h664h78h54bh687h59h9ch54bh65h569q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ndzd2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-f68f58447-zsv2n_openstack(45befe43-dd76-4ea4-a09e-93342d93d9fc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 16:06:21 crc kubenswrapper[4760]: E0121 16:06:21.685431 
4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68h5d8h596h54dhc4hddh54fh5d4h666h5b9h54fhf9h598hfdhd5h686h57ch544hffh85h5b5h569h547h89h5bbh87h56bh689h584h5h654h5bcq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wj9lx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
horizon-56745f5bbf-pgk75_openstack(ef989152-b3b7-4ea7-be1b-25375dc04a66): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 16:06:21 crc kubenswrapper[4760]: E0121 16:06:21.688934 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-f68f58447-zsv2n" podUID="45befe43-dd76-4ea4-a09e-93342d93d9fc" Jan 21 16:06:21 crc kubenswrapper[4760]: E0121 16:06:21.689094 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-56745f5bbf-pgk75" podUID="ef989152-b3b7-4ea7-be1b-25375dc04a66" Jan 21 16:06:23 crc kubenswrapper[4760]: I0121 16:06:23.432688 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-jxk64" Jan 21 16:06:23 crc kubenswrapper[4760]: I0121 16:06:23.547455 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3a06513-66a9-4f6f-b419-d6e6c6427547-combined-ca-bundle\") pod \"c3a06513-66a9-4f6f-b419-d6e6c6427547\" (UID: \"c3a06513-66a9-4f6f-b419-d6e6c6427547\") " Jan 21 16:06:23 crc kubenswrapper[4760]: I0121 16:06:23.547588 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7d8h\" (UniqueName: \"kubernetes.io/projected/c3a06513-66a9-4f6f-b419-d6e6c6427547-kube-api-access-h7d8h\") pod \"c3a06513-66a9-4f6f-b419-d6e6c6427547\" (UID: \"c3a06513-66a9-4f6f-b419-d6e6c6427547\") " Jan 21 16:06:23 crc kubenswrapper[4760]: I0121 16:06:23.547633 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c3a06513-66a9-4f6f-b419-d6e6c6427547-fernet-keys\") pod \"c3a06513-66a9-4f6f-b419-d6e6c6427547\" (UID: \"c3a06513-66a9-4f6f-b419-d6e6c6427547\") " Jan 21 16:06:23 crc kubenswrapper[4760]: I0121 16:06:23.547723 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3a06513-66a9-4f6f-b419-d6e6c6427547-scripts\") pod \"c3a06513-66a9-4f6f-b419-d6e6c6427547\" (UID: \"c3a06513-66a9-4f6f-b419-d6e6c6427547\") " Jan 21 16:06:23 crc kubenswrapper[4760]: I0121 16:06:23.547843 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3a06513-66a9-4f6f-b419-d6e6c6427547-config-data\") pod \"c3a06513-66a9-4f6f-b419-d6e6c6427547\" (UID: \"c3a06513-66a9-4f6f-b419-d6e6c6427547\") " Jan 21 16:06:23 crc kubenswrapper[4760]: I0121 16:06:23.547874 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/c3a06513-66a9-4f6f-b419-d6e6c6427547-credential-keys\") pod \"c3a06513-66a9-4f6f-b419-d6e6c6427547\" (UID: \"c3a06513-66a9-4f6f-b419-d6e6c6427547\") " Jan 21 16:06:23 crc kubenswrapper[4760]: I0121 16:06:23.557547 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3a06513-66a9-4f6f-b419-d6e6c6427547-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "c3a06513-66a9-4f6f-b419-d6e6c6427547" (UID: "c3a06513-66a9-4f6f-b419-d6e6c6427547"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:06:23 crc kubenswrapper[4760]: I0121 16:06:23.558525 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3a06513-66a9-4f6f-b419-d6e6c6427547-scripts" (OuterVolumeSpecName: "scripts") pod "c3a06513-66a9-4f6f-b419-d6e6c6427547" (UID: "c3a06513-66a9-4f6f-b419-d6e6c6427547"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:06:23 crc kubenswrapper[4760]: I0121 16:06:23.559764 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3a06513-66a9-4f6f-b419-d6e6c6427547-kube-api-access-h7d8h" (OuterVolumeSpecName: "kube-api-access-h7d8h") pod "c3a06513-66a9-4f6f-b419-d6e6c6427547" (UID: "c3a06513-66a9-4f6f-b419-d6e6c6427547"). InnerVolumeSpecName "kube-api-access-h7d8h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:06:23 crc kubenswrapper[4760]: I0121 16:06:23.571160 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3a06513-66a9-4f6f-b419-d6e6c6427547-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "c3a06513-66a9-4f6f-b419-d6e6c6427547" (UID: "c3a06513-66a9-4f6f-b419-d6e6c6427547"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:06:23 crc kubenswrapper[4760]: I0121 16:06:23.586840 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3a06513-66a9-4f6f-b419-d6e6c6427547-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c3a06513-66a9-4f6f-b419-d6e6c6427547" (UID: "c3a06513-66a9-4f6f-b419-d6e6c6427547"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:06:23 crc kubenswrapper[4760]: I0121 16:06:23.584624 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3a06513-66a9-4f6f-b419-d6e6c6427547-config-data" (OuterVolumeSpecName: "config-data") pod "c3a06513-66a9-4f6f-b419-d6e6c6427547" (UID: "c3a06513-66a9-4f6f-b419-d6e6c6427547"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:06:23 crc kubenswrapper[4760]: I0121 16:06:23.650083 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3a06513-66a9-4f6f-b419-d6e6c6427547-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:23 crc kubenswrapper[4760]: I0121 16:06:23.650118 4760 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c3a06513-66a9-4f6f-b419-d6e6c6427547-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:23 crc kubenswrapper[4760]: I0121 16:06:23.650127 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3a06513-66a9-4f6f-b419-d6e6c6427547-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:23 crc kubenswrapper[4760]: I0121 16:06:23.650138 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7d8h\" (UniqueName: \"kubernetes.io/projected/c3a06513-66a9-4f6f-b419-d6e6c6427547-kube-api-access-h7d8h\") on node \"crc\" 
DevicePath \"\"" Jan 21 16:06:23 crc kubenswrapper[4760]: I0121 16:06:23.650146 4760 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c3a06513-66a9-4f6f-b419-d6e6c6427547-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:23 crc kubenswrapper[4760]: I0121 16:06:23.650154 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3a06513-66a9-4f6f-b419-d6e6c6427547-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.066279 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jxk64" event={"ID":"c3a06513-66a9-4f6f-b419-d6e6c6427547","Type":"ContainerDied","Data":"91ae9aa025a96304b441062f61c991081e3d9849cdedbc921b2b4f5ea8668d8b"} Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.066350 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91ae9aa025a96304b441062f61c991081e3d9849cdedbc921b2b4f5ea8668d8b" Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.066370 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-jxk64" Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.532206 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-jxk64"] Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.540348 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-jxk64"] Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.625666 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-f8htt"] Jan 21 16:06:24 crc kubenswrapper[4760]: E0121 16:06:24.626315 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3a06513-66a9-4f6f-b419-d6e6c6427547" containerName="keystone-bootstrap" Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.628497 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3a06513-66a9-4f6f-b419-d6e6c6427547" containerName="keystone-bootstrap" Jan 21 16:06:24 crc kubenswrapper[4760]: E0121 16:06:24.628549 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15134486-2d84-4c09-9a92-4df82dfcf01a" containerName="dnsmasq-dns" Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.628561 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="15134486-2d84-4c09-9a92-4df82dfcf01a" containerName="dnsmasq-dns" Jan 21 16:06:24 crc kubenswrapper[4760]: E0121 16:06:24.628589 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15134486-2d84-4c09-9a92-4df82dfcf01a" containerName="init" Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.628600 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="15134486-2d84-4c09-9a92-4df82dfcf01a" containerName="init" Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.629128 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="15134486-2d84-4c09-9a92-4df82dfcf01a" containerName="dnsmasq-dns" Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.629165 4760 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="c3a06513-66a9-4f6f-b419-d6e6c6427547" containerName="keystone-bootstrap" Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.630167 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-f8htt" Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.634924 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.635269 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-5vfwv" Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.635301 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.637592 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.637760 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.659892 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-f8htt"] Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.688148 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6qbd\" (UniqueName: \"kubernetes.io/projected/20523ada-9ffa-4d1d-bf08-913672aa7df6-kube-api-access-c6qbd\") pod \"keystone-bootstrap-f8htt\" (UID: \"20523ada-9ffa-4d1d-bf08-913672aa7df6\") " pod="openstack/keystone-bootstrap-f8htt" Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.693680 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20523ada-9ffa-4d1d-bf08-913672aa7df6-scripts\") pod \"keystone-bootstrap-f8htt\" (UID: 
\"20523ada-9ffa-4d1d-bf08-913672aa7df6\") " pod="openstack/keystone-bootstrap-f8htt" Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.696308 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20523ada-9ffa-4d1d-bf08-913672aa7df6-config-data\") pod \"keystone-bootstrap-f8htt\" (UID: \"20523ada-9ffa-4d1d-bf08-913672aa7df6\") " pod="openstack/keystone-bootstrap-f8htt" Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.696422 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/20523ada-9ffa-4d1d-bf08-913672aa7df6-fernet-keys\") pod \"keystone-bootstrap-f8htt\" (UID: \"20523ada-9ffa-4d1d-bf08-913672aa7df6\") " pod="openstack/keystone-bootstrap-f8htt" Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.696710 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/20523ada-9ffa-4d1d-bf08-913672aa7df6-credential-keys\") pod \"keystone-bootstrap-f8htt\" (UID: \"20523ada-9ffa-4d1d-bf08-913672aa7df6\") " pod="openstack/keystone-bootstrap-f8htt" Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.696761 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20523ada-9ffa-4d1d-bf08-913672aa7df6-combined-ca-bundle\") pod \"keystone-bootstrap-f8htt\" (UID: \"20523ada-9ffa-4d1d-bf08-913672aa7df6\") " pod="openstack/keystone-bootstrap-f8htt" Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.798470 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6qbd\" (UniqueName: \"kubernetes.io/projected/20523ada-9ffa-4d1d-bf08-913672aa7df6-kube-api-access-c6qbd\") pod \"keystone-bootstrap-f8htt\" (UID: 
\"20523ada-9ffa-4d1d-bf08-913672aa7df6\") " pod="openstack/keystone-bootstrap-f8htt" Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.798610 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20523ada-9ffa-4d1d-bf08-913672aa7df6-scripts\") pod \"keystone-bootstrap-f8htt\" (UID: \"20523ada-9ffa-4d1d-bf08-913672aa7df6\") " pod="openstack/keystone-bootstrap-f8htt" Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.798660 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20523ada-9ffa-4d1d-bf08-913672aa7df6-config-data\") pod \"keystone-bootstrap-f8htt\" (UID: \"20523ada-9ffa-4d1d-bf08-913672aa7df6\") " pod="openstack/keystone-bootstrap-f8htt" Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.798689 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/20523ada-9ffa-4d1d-bf08-913672aa7df6-fernet-keys\") pod \"keystone-bootstrap-f8htt\" (UID: \"20523ada-9ffa-4d1d-bf08-913672aa7df6\") " pod="openstack/keystone-bootstrap-f8htt" Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.798776 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/20523ada-9ffa-4d1d-bf08-913672aa7df6-credential-keys\") pod \"keystone-bootstrap-f8htt\" (UID: \"20523ada-9ffa-4d1d-bf08-913672aa7df6\") " pod="openstack/keystone-bootstrap-f8htt" Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.798801 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20523ada-9ffa-4d1d-bf08-913672aa7df6-combined-ca-bundle\") pod \"keystone-bootstrap-f8htt\" (UID: \"20523ada-9ffa-4d1d-bf08-913672aa7df6\") " pod="openstack/keystone-bootstrap-f8htt" Jan 21 16:06:24 crc kubenswrapper[4760]: 
I0121 16:06:24.811199 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/20523ada-9ffa-4d1d-bf08-913672aa7df6-credential-keys\") pod \"keystone-bootstrap-f8htt\" (UID: \"20523ada-9ffa-4d1d-bf08-913672aa7df6\") " pod="openstack/keystone-bootstrap-f8htt" Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.811485 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20523ada-9ffa-4d1d-bf08-913672aa7df6-scripts\") pod \"keystone-bootstrap-f8htt\" (UID: \"20523ada-9ffa-4d1d-bf08-913672aa7df6\") " pod="openstack/keystone-bootstrap-f8htt" Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.814294 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20523ada-9ffa-4d1d-bf08-913672aa7df6-combined-ca-bundle\") pod \"keystone-bootstrap-f8htt\" (UID: \"20523ada-9ffa-4d1d-bf08-913672aa7df6\") " pod="openstack/keystone-bootstrap-f8htt" Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.817428 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/20523ada-9ffa-4d1d-bf08-913672aa7df6-fernet-keys\") pod \"keystone-bootstrap-f8htt\" (UID: \"20523ada-9ffa-4d1d-bf08-913672aa7df6\") " pod="openstack/keystone-bootstrap-f8htt" Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.819859 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20523ada-9ffa-4d1d-bf08-913672aa7df6-config-data\") pod \"keystone-bootstrap-f8htt\" (UID: \"20523ada-9ffa-4d1d-bf08-913672aa7df6\") " pod="openstack/keystone-bootstrap-f8htt" Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.820967 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6qbd\" (UniqueName: 
\"kubernetes.io/projected/20523ada-9ffa-4d1d-bf08-913672aa7df6-kube-api-access-c6qbd\") pod \"keystone-bootstrap-f8htt\" (UID: \"20523ada-9ffa-4d1d-bf08-913672aa7df6\") " pod="openstack/keystone-bootstrap-f8htt" Jan 21 16:06:24 crc kubenswrapper[4760]: I0121 16:06:24.965984 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-f8htt" Jan 21 16:06:25 crc kubenswrapper[4760]: I0121 16:06:25.637269 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3a06513-66a9-4f6f-b419-d6e6c6427547" path="/var/lib/kubelet/pods/c3a06513-66a9-4f6f-b419-d6e6c6427547/volumes" Jan 21 16:06:35 crc kubenswrapper[4760]: I0121 16:06:35.790497 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6f6cd74fcc-kvhmv" Jan 21 16:06:35 crc kubenswrapper[4760]: I0121 16:06:35.912725 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/154b8943-7072-4dbe-89b0-492e321973f1-horizon-secret-key\") pod \"154b8943-7072-4dbe-89b0-492e321973f1\" (UID: \"154b8943-7072-4dbe-89b0-492e321973f1\") " Jan 21 16:06:35 crc kubenswrapper[4760]: I0121 16:06:35.912797 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/154b8943-7072-4dbe-89b0-492e321973f1-config-data\") pod \"154b8943-7072-4dbe-89b0-492e321973f1\" (UID: \"154b8943-7072-4dbe-89b0-492e321973f1\") " Jan 21 16:06:35 crc kubenswrapper[4760]: I0121 16:06:35.912868 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxnm7\" (UniqueName: \"kubernetes.io/projected/154b8943-7072-4dbe-89b0-492e321973f1-kube-api-access-zxnm7\") pod \"154b8943-7072-4dbe-89b0-492e321973f1\" (UID: \"154b8943-7072-4dbe-89b0-492e321973f1\") " Jan 21 16:06:35 crc kubenswrapper[4760]: I0121 16:06:35.913006 4760 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/154b8943-7072-4dbe-89b0-492e321973f1-scripts\") pod \"154b8943-7072-4dbe-89b0-492e321973f1\" (UID: \"154b8943-7072-4dbe-89b0-492e321973f1\") " Jan 21 16:06:35 crc kubenswrapper[4760]: I0121 16:06:35.913094 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/154b8943-7072-4dbe-89b0-492e321973f1-logs\") pod \"154b8943-7072-4dbe-89b0-492e321973f1\" (UID: \"154b8943-7072-4dbe-89b0-492e321973f1\") " Jan 21 16:06:35 crc kubenswrapper[4760]: I0121 16:06:35.913786 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/154b8943-7072-4dbe-89b0-492e321973f1-logs" (OuterVolumeSpecName: "logs") pod "154b8943-7072-4dbe-89b0-492e321973f1" (UID: "154b8943-7072-4dbe-89b0-492e321973f1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:06:35 crc kubenswrapper[4760]: I0121 16:06:35.914170 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/154b8943-7072-4dbe-89b0-492e321973f1-scripts" (OuterVolumeSpecName: "scripts") pod "154b8943-7072-4dbe-89b0-492e321973f1" (UID: "154b8943-7072-4dbe-89b0-492e321973f1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:06:35 crc kubenswrapper[4760]: I0121 16:06:35.914339 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/154b8943-7072-4dbe-89b0-492e321973f1-config-data" (OuterVolumeSpecName: "config-data") pod "154b8943-7072-4dbe-89b0-492e321973f1" (UID: "154b8943-7072-4dbe-89b0-492e321973f1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:06:35 crc kubenswrapper[4760]: I0121 16:06:35.918920 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/154b8943-7072-4dbe-89b0-492e321973f1-kube-api-access-zxnm7" (OuterVolumeSpecName: "kube-api-access-zxnm7") pod "154b8943-7072-4dbe-89b0-492e321973f1" (UID: "154b8943-7072-4dbe-89b0-492e321973f1"). InnerVolumeSpecName "kube-api-access-zxnm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:06:35 crc kubenswrapper[4760]: I0121 16:06:35.923496 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/154b8943-7072-4dbe-89b0-492e321973f1-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "154b8943-7072-4dbe-89b0-492e321973f1" (UID: "154b8943-7072-4dbe-89b0-492e321973f1"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.015661 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/154b8943-7072-4dbe-89b0-492e321973f1-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.015705 4760 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/154b8943-7072-4dbe-89b0-492e321973f1-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.015722 4760 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/154b8943-7072-4dbe-89b0-492e321973f1-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.015736 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/154b8943-7072-4dbe-89b0-492e321973f1-config-data\") on node \"crc\" DevicePath 
\"\"" Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.015747 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxnm7\" (UniqueName: \"kubernetes.io/projected/154b8943-7072-4dbe-89b0-492e321973f1-kube-api-access-zxnm7\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.231199 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f6cd74fcc-kvhmv" event={"ID":"154b8943-7072-4dbe-89b0-492e321973f1","Type":"ContainerDied","Data":"bb6fe64ae2a5bfa113a0cdb0b1c062a0f09bfca14434bc76b9e8979373673e5e"} Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.231262 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6f6cd74fcc-kvhmv" Jan 21 16:06:36 crc kubenswrapper[4760]: E0121 16:06:36.248863 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Jan 21 16:06:36 crc kubenswrapper[4760]: E0121 16:06:36.249071 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-crhwn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-pgvwf_openstack(e272905b-28ec-4f49-8c51-f5c5d97c4a9d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 16:06:36 crc kubenswrapper[4760]: E0121 16:06:36.250288 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-pgvwf" 
podUID="e272905b-28ec-4f49-8c51-f5c5d97c4a9d" Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.302339 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-f68f58447-zsv2n" Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.313805 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-56745f5bbf-pgk75" Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.352403 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6f6cd74fcc-kvhmv"] Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.367992 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6f6cd74fcc-kvhmv"] Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.427143 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/45befe43-dd76-4ea4-a09e-93342d93d9fc-horizon-secret-key\") pod \"45befe43-dd76-4ea4-a09e-93342d93d9fc\" (UID: \"45befe43-dd76-4ea4-a09e-93342d93d9fc\") " Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.427212 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45befe43-dd76-4ea4-a09e-93342d93d9fc-logs\") pod \"45befe43-dd76-4ea4-a09e-93342d93d9fc\" (UID: \"45befe43-dd76-4ea4-a09e-93342d93d9fc\") " Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.427350 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj9lx\" (UniqueName: \"kubernetes.io/projected/ef989152-b3b7-4ea7-be1b-25375dc04a66-kube-api-access-wj9lx\") pod \"ef989152-b3b7-4ea7-be1b-25375dc04a66\" (UID: \"ef989152-b3b7-4ea7-be1b-25375dc04a66\") " Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.427389 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/45befe43-dd76-4ea4-a09e-93342d93d9fc-config-data\") pod \"45befe43-dd76-4ea4-a09e-93342d93d9fc\" (UID: \"45befe43-dd76-4ea4-a09e-93342d93d9fc\") " Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.427425 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef989152-b3b7-4ea7-be1b-25375dc04a66-logs\") pod \"ef989152-b3b7-4ea7-be1b-25375dc04a66\" (UID: \"ef989152-b3b7-4ea7-be1b-25375dc04a66\") " Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.427539 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndzd2\" (UniqueName: \"kubernetes.io/projected/45befe43-dd76-4ea4-a09e-93342d93d9fc-kube-api-access-ndzd2\") pod \"45befe43-dd76-4ea4-a09e-93342d93d9fc\" (UID: \"45befe43-dd76-4ea4-a09e-93342d93d9fc\") " Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.427576 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ef989152-b3b7-4ea7-be1b-25375dc04a66-scripts\") pod \"ef989152-b3b7-4ea7-be1b-25375dc04a66\" (UID: \"ef989152-b3b7-4ea7-be1b-25375dc04a66\") " Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.427621 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ef989152-b3b7-4ea7-be1b-25375dc04a66-config-data\") pod \"ef989152-b3b7-4ea7-be1b-25375dc04a66\" (UID: \"ef989152-b3b7-4ea7-be1b-25375dc04a66\") " Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.427695 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ef989152-b3b7-4ea7-be1b-25375dc04a66-horizon-secret-key\") pod \"ef989152-b3b7-4ea7-be1b-25375dc04a66\" (UID: \"ef989152-b3b7-4ea7-be1b-25375dc04a66\") " Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.427733 4760 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/45befe43-dd76-4ea4-a09e-93342d93d9fc-scripts\") pod \"45befe43-dd76-4ea4-a09e-93342d93d9fc\" (UID: \"45befe43-dd76-4ea4-a09e-93342d93d9fc\") " Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.427807 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45befe43-dd76-4ea4-a09e-93342d93d9fc-logs" (OuterVolumeSpecName: "logs") pod "45befe43-dd76-4ea4-a09e-93342d93d9fc" (UID: "45befe43-dd76-4ea4-a09e-93342d93d9fc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.428297 4760 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45befe43-dd76-4ea4-a09e-93342d93d9fc-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.428853 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45befe43-dd76-4ea4-a09e-93342d93d9fc-scripts" (OuterVolumeSpecName: "scripts") pod "45befe43-dd76-4ea4-a09e-93342d93d9fc" (UID: "45befe43-dd76-4ea4-a09e-93342d93d9fc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.428857 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef989152-b3b7-4ea7-be1b-25375dc04a66-scripts" (OuterVolumeSpecName: "scripts") pod "ef989152-b3b7-4ea7-be1b-25375dc04a66" (UID: "ef989152-b3b7-4ea7-be1b-25375dc04a66"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.429225 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef989152-b3b7-4ea7-be1b-25375dc04a66-config-data" (OuterVolumeSpecName: "config-data") pod "ef989152-b3b7-4ea7-be1b-25375dc04a66" (UID: "ef989152-b3b7-4ea7-be1b-25375dc04a66"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.429281 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45befe43-dd76-4ea4-a09e-93342d93d9fc-config-data" (OuterVolumeSpecName: "config-data") pod "45befe43-dd76-4ea4-a09e-93342d93d9fc" (UID: "45befe43-dd76-4ea4-a09e-93342d93d9fc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.429559 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef989152-b3b7-4ea7-be1b-25375dc04a66-logs" (OuterVolumeSpecName: "logs") pod "ef989152-b3b7-4ea7-be1b-25375dc04a66" (UID: "ef989152-b3b7-4ea7-be1b-25375dc04a66"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.432478 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45befe43-dd76-4ea4-a09e-93342d93d9fc-kube-api-access-ndzd2" (OuterVolumeSpecName: "kube-api-access-ndzd2") pod "45befe43-dd76-4ea4-a09e-93342d93d9fc" (UID: "45befe43-dd76-4ea4-a09e-93342d93d9fc"). InnerVolumeSpecName "kube-api-access-ndzd2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.432515 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef989152-b3b7-4ea7-be1b-25375dc04a66-kube-api-access-wj9lx" (OuterVolumeSpecName: "kube-api-access-wj9lx") pod "ef989152-b3b7-4ea7-be1b-25375dc04a66" (UID: "ef989152-b3b7-4ea7-be1b-25375dc04a66"). InnerVolumeSpecName "kube-api-access-wj9lx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.433168 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef989152-b3b7-4ea7-be1b-25375dc04a66-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "ef989152-b3b7-4ea7-be1b-25375dc04a66" (UID: "ef989152-b3b7-4ea7-be1b-25375dc04a66"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.434018 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45befe43-dd76-4ea4-a09e-93342d93d9fc-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "45befe43-dd76-4ea4-a09e-93342d93d9fc" (UID: "45befe43-dd76-4ea4-a09e-93342d93d9fc"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.529881 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wj9lx\" (UniqueName: \"kubernetes.io/projected/ef989152-b3b7-4ea7-be1b-25375dc04a66-kube-api-access-wj9lx\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.529952 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/45befe43-dd76-4ea4-a09e-93342d93d9fc-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.529965 4760 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ef989152-b3b7-4ea7-be1b-25375dc04a66-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.529977 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndzd2\" (UniqueName: \"kubernetes.io/projected/45befe43-dd76-4ea4-a09e-93342d93d9fc-kube-api-access-ndzd2\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.529991 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ef989152-b3b7-4ea7-be1b-25375dc04a66-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.530001 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ef989152-b3b7-4ea7-be1b-25375dc04a66-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.530011 4760 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ef989152-b3b7-4ea7-be1b-25375dc04a66-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.530021 4760 
reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/45befe43-dd76-4ea4-a09e-93342d93d9fc-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.530032 4760 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/45befe43-dd76-4ea4-a09e-93342d93d9fc-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:36 crc kubenswrapper[4760]: I0121 16:06:36.649600 4760 scope.go:117] "RemoveContainer" containerID="433441af784888a0e207259e7f17ab26778cc42126d4f3c1404bb39f079669b2" Jan 21 16:06:36 crc kubenswrapper[4760]: E0121 16:06:36.651650 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Jan 21 16:06:36 crc kubenswrapper[4760]: E0121 16:06:36.651895 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5c4h4h565h5c8h56hd9h586h57ch678h84hd7h59dh586hffhcfh648hfh584h67h7h585h77h5b6h5d8h598hbdh5b6h58fh5f7h55fh6ch5dq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dxfdz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(a8b68aa1-7489-4689-ad6b-8aa7149b9a67): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 16:06:37 crc kubenswrapper[4760]: I0121 16:06:37.242881 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f68f58447-zsv2n" event={"ID":"45befe43-dd76-4ea4-a09e-93342d93d9fc","Type":"ContainerDied","Data":"4098327d7964885934fe26fdb3d844bea3c5bdd35f319a050aa0039e2e42a6c4"} Jan 21 16:06:37 crc kubenswrapper[4760]: I0121 16:06:37.243035 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-f68f58447-zsv2n" Jan 21 16:06:37 crc kubenswrapper[4760]: I0121 16:06:37.249124 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56745f5bbf-pgk75" event={"ID":"ef989152-b3b7-4ea7-be1b-25375dc04a66","Type":"ContainerDied","Data":"a32d15c9b6c67417ebe4808bf8b666b59029f0e4babb24e4903d1d4bbffdffd0"} Jan 21 16:06:37 crc kubenswrapper[4760]: I0121 16:06:37.249279 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-56745f5bbf-pgk75" Jan 21 16:06:37 crc kubenswrapper[4760]: E0121 16:06:37.254693 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-pgvwf" podUID="e272905b-28ec-4f49-8c51-f5c5d97c4a9d" Jan 21 16:06:37 crc kubenswrapper[4760]: I0121 16:06:37.323938 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-56745f5bbf-pgk75"] Jan 21 16:06:37 crc kubenswrapper[4760]: I0121 16:06:37.339350 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-56745f5bbf-pgk75"] Jan 21 16:06:37 crc kubenswrapper[4760]: I0121 16:06:37.359979 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-f68f58447-zsv2n"] Jan 21 16:06:37 crc kubenswrapper[4760]: I0121 16:06:37.369878 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-f68f58447-zsv2n"] Jan 21 16:06:37 crc kubenswrapper[4760]: I0121 16:06:37.632923 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="154b8943-7072-4dbe-89b0-492e321973f1" path="/var/lib/kubelet/pods/154b8943-7072-4dbe-89b0-492e321973f1/volumes" Jan 21 16:06:37 crc kubenswrapper[4760]: I0121 16:06:37.633819 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45befe43-dd76-4ea4-a09e-93342d93d9fc" path="/var/lib/kubelet/pods/45befe43-dd76-4ea4-a09e-93342d93d9fc/volumes" Jan 21 16:06:37 crc kubenswrapper[4760]: I0121 16:06:37.634379 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef989152-b3b7-4ea7-be1b-25375dc04a66" path="/var/lib/kubelet/pods/ef989152-b3b7-4ea7-be1b-25375dc04a66/volumes" Jan 21 16:06:39 crc kubenswrapper[4760]: E0121 16:06:39.132530 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled 
desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 21 16:06:39 crc kubenswrapper[4760]: E0121 16:06:39.132734 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nt8ww,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,Su
bPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-j76bd_openstack(3bf0e00e-fc38-45a9-8615-dd5398ed1209): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 16:06:39 crc kubenswrapper[4760]: E0121 16:06:39.133809 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-j76bd" podUID="3bf0e00e-fc38-45a9-8615-dd5398ed1209" Jan 21 16:06:39 crc kubenswrapper[4760]: E0121 16:06:39.282313 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-j76bd" podUID="3bf0e00e-fc38-45a9-8615-dd5398ed1209" Jan 21 16:06:39 crc kubenswrapper[4760]: I0121 16:06:39.656247 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-f8htt"] Jan 21 16:06:40 crc kubenswrapper[4760]: I0121 16:06:40.295765 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"d1ccc2ed-d1e8-4b84-807d-55d70e8def12","Type":"ContainerStarted","Data":"340c01cdcf2f2bf822ee52bac7bceb6d0fa14c13044c6e5a77ce83c33c7b5c6c"} Jan 21 16:06:40 crc kubenswrapper[4760]: I0121 16:06:40.296471 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d1ccc2ed-d1e8-4b84-807d-55d70e8def12","Type":"ContainerStarted","Data":"4c8599b9d780198c2e4b87da79727f4ec8db99c1cf644a8081f05cfdadd7c0fd"} Jan 21 16:06:40 crc kubenswrapper[4760]: I0121 16:06:40.296485 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d1ccc2ed-d1e8-4b84-807d-55d70e8def12","Type":"ContainerStarted","Data":"9f965f18c37b7bf9df923528c06df69fca180f02d2756802de31cfe58e59967f"} Jan 21 16:06:40 crc kubenswrapper[4760]: I0121 16:06:40.299480 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-f8htt" event={"ID":"20523ada-9ffa-4d1d-bf08-913672aa7df6","Type":"ContainerStarted","Data":"b6b482d4a0ed30a32f26d81fe7bb9825ad6753daa61a0bf7f635904abc045030"} Jan 21 16:06:40 crc kubenswrapper[4760]: I0121 16:06:40.301863 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c9896dc76-gwrzv" event={"ID":"0e7e96ce-a64f-4a21-97e1-b2ebabc7e236","Type":"ContainerStarted","Data":"612f54d4a291cca7d313cb4a4bbb528e2bf6ea9e286a50f6261fde65a890e7b4"} Jan 21 16:06:40 crc kubenswrapper[4760]: I0121 16:06:40.301902 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c9896dc76-gwrzv" event={"ID":"0e7e96ce-a64f-4a21-97e1-b2ebabc7e236","Type":"ContainerStarted","Data":"dcc4ec3e7e63c420dc623ad8e6e729063186fe795c1b683f0877ecd35a0b9576"} Jan 21 16:06:40 crc kubenswrapper[4760]: I0121 16:06:40.305663 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-789c75ff48-s7f9p" event={"ID":"9ce8d17c-d046-45b5-9136-6faca838de63","Type":"ContainerStarted","Data":"c3bc057180aff5b7f74696812035164d3822f5c925dea41492a6a319d6faaf1f"} 
Jan 21 16:06:40 crc kubenswrapper[4760]: I0121 16:06:40.305729 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-789c75ff48-s7f9p" event={"ID":"9ce8d17c-d046-45b5-9136-6faca838de63","Type":"ContainerStarted","Data":"dbc7df94dfd0bf190529b48e0582f7e96d1e1d6a91f71b8e6cbcbced81b5b549"} Jan 21 16:06:40 crc kubenswrapper[4760]: I0121 16:06:40.313154 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-nf7wp" event={"ID":"c4fdfaae-d8ad-46d6-b30a-1b671408ca51","Type":"ContainerStarted","Data":"ba59503ee28149f2f6bd1845497fbf26cee641517850130c45c50378919dce1a"} Jan 21 16:06:40 crc kubenswrapper[4760]: I0121 16:06:40.316699 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-65fzw" event={"ID":"820ab298-8a58-4ac5-b7d2-ff030c6d2aff","Type":"ContainerStarted","Data":"f2f9c962eea17a5ad22e2d097f61c47f2fc98c187b725e9f8a87fc5cff3b07fb"} Jan 21 16:06:40 crc kubenswrapper[4760]: I0121 16:06:40.341747 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5c9896dc76-gwrzv" podStartSLOduration=3.144842671 podStartE2EDuration="38.341721268s" podCreationTimestamp="2026-01-21 16:06:02 +0000 UTC" firstStartedPulling="2026-01-21 16:06:03.92147964 +0000 UTC m=+1134.589249218" lastFinishedPulling="2026-01-21 16:06:39.118358227 +0000 UTC m=+1169.786127815" observedRunningTime="2026-01-21 16:06:40.334431204 +0000 UTC m=+1171.002200792" watchObservedRunningTime="2026-01-21 16:06:40.341721268 +0000 UTC m=+1171.009490846" Jan 21 16:06:40 crc kubenswrapper[4760]: I0121 16:06:40.388763 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-nf7wp" podStartSLOduration=10.898282722 podStartE2EDuration="1m23.388743363s" podCreationTimestamp="2026-01-21 16:05:17 +0000 UTC" firstStartedPulling="2026-01-21 16:05:23.215453059 +0000 UTC m=+1093.883222637" lastFinishedPulling="2026-01-21 16:06:35.7059137 +0000 UTC 
m=+1166.373683278" observedRunningTime="2026-01-21 16:06:40.385412823 +0000 UTC m=+1171.053182421" watchObservedRunningTime="2026-01-21 16:06:40.388743363 +0000 UTC m=+1171.056512941" Jan 21 16:06:40 crc kubenswrapper[4760]: I0121 16:06:40.398528 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-789c75ff48-s7f9p" podStartSLOduration=3.059092035 podStartE2EDuration="38.398503626s" podCreationTimestamp="2026-01-21 16:06:02 +0000 UTC" firstStartedPulling="2026-01-21 16:06:03.774200073 +0000 UTC m=+1134.441969651" lastFinishedPulling="2026-01-21 16:06:39.113611664 +0000 UTC m=+1169.781381242" observedRunningTime="2026-01-21 16:06:40.358422827 +0000 UTC m=+1171.026192405" watchObservedRunningTime="2026-01-21 16:06:40.398503626 +0000 UTC m=+1171.066273204" Jan 21 16:06:40 crc kubenswrapper[4760]: I0121 16:06:40.416190 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-65fzw" podStartSLOduration=3.749538538 podStartE2EDuration="47.416162438s" podCreationTimestamp="2026-01-21 16:05:53 +0000 UTC" firstStartedPulling="2026-01-21 16:05:55.505649957 +0000 UTC m=+1126.173419535" lastFinishedPulling="2026-01-21 16:06:39.172273857 +0000 UTC m=+1169.840043435" observedRunningTime="2026-01-21 16:06:40.405719549 +0000 UTC m=+1171.073489127" watchObservedRunningTime="2026-01-21 16:06:40.416162438 +0000 UTC m=+1171.083932016" Jan 21 16:06:41 crc kubenswrapper[4760]: I0121 16:06:41.349805 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-f8htt" event={"ID":"20523ada-9ffa-4d1d-bf08-913672aa7df6","Type":"ContainerStarted","Data":"5ec36ce6aab699462c440d289d5c9b3d34f0e04e9578c26f7a3403c0f8a3069f"} Jan 21 16:06:41 crc kubenswrapper[4760]: I0121 16:06:41.374923 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"a8b68aa1-7489-4689-ad6b-8aa7149b9a67","Type":"ContainerStarted","Data":"9d700a5c7ac14932980f3741c7873a20c30a9d2f8d3054dc9aae795c4000fb25"} Jan 21 16:06:41 crc kubenswrapper[4760]: I0121 16:06:41.376296 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-f8htt" podStartSLOduration=17.376272924 podStartE2EDuration="17.376272924s" podCreationTimestamp="2026-01-21 16:06:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:06:41.375639899 +0000 UTC m=+1172.043409487" watchObservedRunningTime="2026-01-21 16:06:41.376272924 +0000 UTC m=+1172.044042502" Jan 21 16:06:41 crc kubenswrapper[4760]: I0121 16:06:41.382835 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d1ccc2ed-d1e8-4b84-807d-55d70e8def12","Type":"ContainerStarted","Data":"9089d9acaef48e1a1217ad3ab5ac84ed29f20ddc981adbf66038218441a890e4"} Jan 21 16:06:42 crc kubenswrapper[4760]: I0121 16:06:42.418743 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d1ccc2ed-d1e8-4b84-807d-55d70e8def12","Type":"ContainerStarted","Data":"b52a4964bcc0e0001d5dd35829c81ef7fb1576a44ed532feab0e950f7c2e8c65"} Jan 21 16:06:42 crc kubenswrapper[4760]: I0121 16:06:42.424695 4760 generic.go:334] "Generic (PLEG): container finished" podID="753473df-c019-484a-95d5-01f46173e10a" containerID="a1d8ef9f5a82dd4c8078328950cb300d1c89fe54dff0b7699d3d291f3d477977" exitCode=0 Jan 21 16:06:42 crc kubenswrapper[4760]: I0121 16:06:42.425805 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-tp55g" event={"ID":"753473df-c019-484a-95d5-01f46173e10a","Type":"ContainerDied","Data":"a1d8ef9f5a82dd4c8078328950cb300d1c89fe54dff0b7699d3d291f3d477977"} Jan 21 16:06:43 crc kubenswrapper[4760]: I0121 16:06:43.052638 4760 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/horizon-789c75ff48-s7f9p" Jan 21 16:06:43 crc kubenswrapper[4760]: I0121 16:06:43.052713 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-789c75ff48-s7f9p" Jan 21 16:06:43 crc kubenswrapper[4760]: I0121 16:06:43.315939 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5c9896dc76-gwrzv" Jan 21 16:06:43 crc kubenswrapper[4760]: I0121 16:06:43.318339 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5c9896dc76-gwrzv" Jan 21 16:06:43 crc kubenswrapper[4760]: I0121 16:06:43.436839 4760 generic.go:334] "Generic (PLEG): container finished" podID="820ab298-8a58-4ac5-b7d2-ff030c6d2aff" containerID="f2f9c962eea17a5ad22e2d097f61c47f2fc98c187b725e9f8a87fc5cff3b07fb" exitCode=0 Jan 21 16:06:43 crc kubenswrapper[4760]: I0121 16:06:43.436966 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-65fzw" event={"ID":"820ab298-8a58-4ac5-b7d2-ff030c6d2aff","Type":"ContainerDied","Data":"f2f9c962eea17a5ad22e2d097f61c47f2fc98c187b725e9f8a87fc5cff3b07fb"} Jan 21 16:06:43 crc kubenswrapper[4760]: I0121 16:06:43.474593 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d1ccc2ed-d1e8-4b84-807d-55d70e8def12","Type":"ContainerStarted","Data":"7d5f7d1b98417868c935b42e4d87e4a49de52b8514a5b103ed68e63c6c1470b3"} Jan 21 16:06:47 crc kubenswrapper[4760]: I0121 16:06:47.338262 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-tp55g" Jan 21 16:06:47 crc kubenswrapper[4760]: I0121 16:06:47.451114 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/753473df-c019-484a-95d5-01f46173e10a-combined-ca-bundle\") pod \"753473df-c019-484a-95d5-01f46173e10a\" (UID: \"753473df-c019-484a-95d5-01f46173e10a\") " Jan 21 16:06:47 crc kubenswrapper[4760]: I0121 16:06:47.451421 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/753473df-c019-484a-95d5-01f46173e10a-config\") pod \"753473df-c019-484a-95d5-01f46173e10a\" (UID: \"753473df-c019-484a-95d5-01f46173e10a\") " Jan 21 16:06:47 crc kubenswrapper[4760]: I0121 16:06:47.451523 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzgmh\" (UniqueName: \"kubernetes.io/projected/753473df-c019-484a-95d5-01f46173e10a-kube-api-access-tzgmh\") pod \"753473df-c019-484a-95d5-01f46173e10a\" (UID: \"753473df-c019-484a-95d5-01f46173e10a\") " Jan 21 16:06:47 crc kubenswrapper[4760]: I0121 16:06:47.638900 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/753473df-c019-484a-95d5-01f46173e10a-kube-api-access-tzgmh" (OuterVolumeSpecName: "kube-api-access-tzgmh") pod "753473df-c019-484a-95d5-01f46173e10a" (UID: "753473df-c019-484a-95d5-01f46173e10a"). InnerVolumeSpecName "kube-api-access-tzgmh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:06:47 crc kubenswrapper[4760]: I0121 16:06:47.639864 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzgmh\" (UniqueName: \"kubernetes.io/projected/753473df-c019-484a-95d5-01f46173e10a-kube-api-access-tzgmh\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:47 crc kubenswrapper[4760]: I0121 16:06:47.669458 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/753473df-c019-484a-95d5-01f46173e10a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "753473df-c019-484a-95d5-01f46173e10a" (UID: "753473df-c019-484a-95d5-01f46173e10a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:06:47 crc kubenswrapper[4760]: I0121 16:06:47.674366 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/753473df-c019-484a-95d5-01f46173e10a-config" (OuterVolumeSpecName: "config") pod "753473df-c019-484a-95d5-01f46173e10a" (UID: "753473df-c019-484a-95d5-01f46173e10a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:06:47 crc kubenswrapper[4760]: I0121 16:06:47.742569 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/753473df-c019-484a-95d5-01f46173e10a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:47 crc kubenswrapper[4760]: I0121 16:06:47.742626 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/753473df-c019-484a-95d5-01f46173e10a-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:47 crc kubenswrapper[4760]: I0121 16:06:47.799924 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-tp55g" event={"ID":"753473df-c019-484a-95d5-01f46173e10a","Type":"ContainerDied","Data":"70907ce332e1a9f9fa5d75d7c40d92f0205ed257fe3dfaea02a1a05b5443bda4"} Jan 21 16:06:47 crc kubenswrapper[4760]: I0121 16:06:47.800569 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70907ce332e1a9f9fa5d75d7c40d92f0205ed257fe3dfaea02a1a05b5443bda4" Jan 21 16:06:47 crc kubenswrapper[4760]: I0121 16:06:47.800240 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-tp55g" Jan 21 16:06:47 crc kubenswrapper[4760]: E0121 16:06:47.998260 4760 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod753473df_c019_484a_95d5_01f46173e10a.slice/crio-70907ce332e1a9f9fa5d75d7c40d92f0205ed257fe3dfaea02a1a05b5443bda4\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod753473df_c019_484a_95d5_01f46173e10a.slice\": RecentStats: unable to find data in memory cache]" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.564746 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b46f56485-gbws9"] Jan 21 16:06:48 crc kubenswrapper[4760]: E0121 16:06:48.565101 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="753473df-c019-484a-95d5-01f46173e10a" containerName="neutron-db-sync" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.565113 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="753473df-c019-484a-95d5-01f46173e10a" containerName="neutron-db-sync" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.567094 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="753473df-c019-484a-95d5-01f46173e10a" containerName="neutron-db-sync" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.568141 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b46f56485-gbws9" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.599165 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b46f56485-gbws9"] Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.657876 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-64f66997d8-wj49l"] Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.669215 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-64f66997d8-wj49l" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.670426 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91fc26b9-373e-446b-8345-eae2740aac66-combined-ca-bundle\") pod \"neutron-64f66997d8-wj49l\" (UID: \"91fc26b9-373e-446b-8345-eae2740aac66\") " pod="openstack/neutron-64f66997d8-wj49l" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.670516 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7blz8\" (UniqueName: \"kubernetes.io/projected/91fc26b9-373e-446b-8345-eae2740aac66-kube-api-access-7blz8\") pod \"neutron-64f66997d8-wj49l\" (UID: \"91fc26b9-373e-446b-8345-eae2740aac66\") " pod="openstack/neutron-64f66997d8-wj49l" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.670597 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2d3b257-75ab-4b85-b13b-081bf5b4825e-config\") pod \"dnsmasq-dns-6b46f56485-gbws9\" (UID: \"c2d3b257-75ab-4b85-b13b-081bf5b4825e\") " pod="openstack/dnsmasq-dns-6b46f56485-gbws9" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.670647 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/c2d3b257-75ab-4b85-b13b-081bf5b4825e-dns-svc\") pod \"dnsmasq-dns-6b46f56485-gbws9\" (UID: \"c2d3b257-75ab-4b85-b13b-081bf5b4825e\") " pod="openstack/dnsmasq-dns-6b46f56485-gbws9" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.670674 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffxvx\" (UniqueName: \"kubernetes.io/projected/c2d3b257-75ab-4b85-b13b-081bf5b4825e-kube-api-access-ffxvx\") pod \"dnsmasq-dns-6b46f56485-gbws9\" (UID: \"c2d3b257-75ab-4b85-b13b-081bf5b4825e\") " pod="openstack/dnsmasq-dns-6b46f56485-gbws9" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.670729 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/91fc26b9-373e-446b-8345-eae2740aac66-httpd-config\") pod \"neutron-64f66997d8-wj49l\" (UID: \"91fc26b9-373e-446b-8345-eae2740aac66\") " pod="openstack/neutron-64f66997d8-wj49l" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.670752 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2d3b257-75ab-4b85-b13b-081bf5b4825e-ovsdbserver-sb\") pod \"dnsmasq-dns-6b46f56485-gbws9\" (UID: \"c2d3b257-75ab-4b85-b13b-081bf5b4825e\") " pod="openstack/dnsmasq-dns-6b46f56485-gbws9" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.670792 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2d3b257-75ab-4b85-b13b-081bf5b4825e-ovsdbserver-nb\") pod \"dnsmasq-dns-6b46f56485-gbws9\" (UID: \"c2d3b257-75ab-4b85-b13b-081bf5b4825e\") " pod="openstack/dnsmasq-dns-6b46f56485-gbws9" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.670836 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/91fc26b9-373e-446b-8345-eae2740aac66-ovndb-tls-certs\") pod \"neutron-64f66997d8-wj49l\" (UID: \"91fc26b9-373e-446b-8345-eae2740aac66\") " pod="openstack/neutron-64f66997d8-wj49l" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.670875 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/91fc26b9-373e-446b-8345-eae2740aac66-config\") pod \"neutron-64f66997d8-wj49l\" (UID: \"91fc26b9-373e-446b-8345-eae2740aac66\") " pod="openstack/neutron-64f66997d8-wj49l" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.672194 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-64f66997d8-wj49l"] Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.679460 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.679505 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.679749 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.679755 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-4pz29" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.774425 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7blz8\" (UniqueName: \"kubernetes.io/projected/91fc26b9-373e-446b-8345-eae2740aac66-kube-api-access-7blz8\") pod \"neutron-64f66997d8-wj49l\" (UID: \"91fc26b9-373e-446b-8345-eae2740aac66\") " pod="openstack/neutron-64f66997d8-wj49l" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.775184 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/c2d3b257-75ab-4b85-b13b-081bf5b4825e-config\") pod \"dnsmasq-dns-6b46f56485-gbws9\" (UID: \"c2d3b257-75ab-4b85-b13b-081bf5b4825e\") " pod="openstack/dnsmasq-dns-6b46f56485-gbws9" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.775246 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2d3b257-75ab-4b85-b13b-081bf5b4825e-dns-svc\") pod \"dnsmasq-dns-6b46f56485-gbws9\" (UID: \"c2d3b257-75ab-4b85-b13b-081bf5b4825e\") " pod="openstack/dnsmasq-dns-6b46f56485-gbws9" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.775301 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffxvx\" (UniqueName: \"kubernetes.io/projected/c2d3b257-75ab-4b85-b13b-081bf5b4825e-kube-api-access-ffxvx\") pod \"dnsmasq-dns-6b46f56485-gbws9\" (UID: \"c2d3b257-75ab-4b85-b13b-081bf5b4825e\") " pod="openstack/dnsmasq-dns-6b46f56485-gbws9" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.775429 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/91fc26b9-373e-446b-8345-eae2740aac66-httpd-config\") pod \"neutron-64f66997d8-wj49l\" (UID: \"91fc26b9-373e-446b-8345-eae2740aac66\") " pod="openstack/neutron-64f66997d8-wj49l" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.775468 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2d3b257-75ab-4b85-b13b-081bf5b4825e-ovsdbserver-sb\") pod \"dnsmasq-dns-6b46f56485-gbws9\" (UID: \"c2d3b257-75ab-4b85-b13b-081bf5b4825e\") " pod="openstack/dnsmasq-dns-6b46f56485-gbws9" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.775518 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/c2d3b257-75ab-4b85-b13b-081bf5b4825e-ovsdbserver-nb\") pod \"dnsmasq-dns-6b46f56485-gbws9\" (UID: \"c2d3b257-75ab-4b85-b13b-081bf5b4825e\") " pod="openstack/dnsmasq-dns-6b46f56485-gbws9" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.775584 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/91fc26b9-373e-446b-8345-eae2740aac66-ovndb-tls-certs\") pod \"neutron-64f66997d8-wj49l\" (UID: \"91fc26b9-373e-446b-8345-eae2740aac66\") " pod="openstack/neutron-64f66997d8-wj49l" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.775625 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/91fc26b9-373e-446b-8345-eae2740aac66-config\") pod \"neutron-64f66997d8-wj49l\" (UID: \"91fc26b9-373e-446b-8345-eae2740aac66\") " pod="openstack/neutron-64f66997d8-wj49l" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.775693 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91fc26b9-373e-446b-8345-eae2740aac66-combined-ca-bundle\") pod \"neutron-64f66997d8-wj49l\" (UID: \"91fc26b9-373e-446b-8345-eae2740aac66\") " pod="openstack/neutron-64f66997d8-wj49l" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.776799 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2d3b257-75ab-4b85-b13b-081bf5b4825e-ovsdbserver-sb\") pod \"dnsmasq-dns-6b46f56485-gbws9\" (UID: \"c2d3b257-75ab-4b85-b13b-081bf5b4825e\") " pod="openstack/dnsmasq-dns-6b46f56485-gbws9" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.777517 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2d3b257-75ab-4b85-b13b-081bf5b4825e-dns-svc\") pod \"dnsmasq-dns-6b46f56485-gbws9\" (UID: 
\"c2d3b257-75ab-4b85-b13b-081bf5b4825e\") " pod="openstack/dnsmasq-dns-6b46f56485-gbws9" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.777976 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2d3b257-75ab-4b85-b13b-081bf5b4825e-config\") pod \"dnsmasq-dns-6b46f56485-gbws9\" (UID: \"c2d3b257-75ab-4b85-b13b-081bf5b4825e\") " pod="openstack/dnsmasq-dns-6b46f56485-gbws9" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.779451 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2d3b257-75ab-4b85-b13b-081bf5b4825e-ovsdbserver-nb\") pod \"dnsmasq-dns-6b46f56485-gbws9\" (UID: \"c2d3b257-75ab-4b85-b13b-081bf5b4825e\") " pod="openstack/dnsmasq-dns-6b46f56485-gbws9" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.787983 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/91fc26b9-373e-446b-8345-eae2740aac66-ovndb-tls-certs\") pod \"neutron-64f66997d8-wj49l\" (UID: \"91fc26b9-373e-446b-8345-eae2740aac66\") " pod="openstack/neutron-64f66997d8-wj49l" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.788782 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/91fc26b9-373e-446b-8345-eae2740aac66-config\") pod \"neutron-64f66997d8-wj49l\" (UID: \"91fc26b9-373e-446b-8345-eae2740aac66\") " pod="openstack/neutron-64f66997d8-wj49l" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.799620 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91fc26b9-373e-446b-8345-eae2740aac66-combined-ca-bundle\") pod \"neutron-64f66997d8-wj49l\" (UID: \"91fc26b9-373e-446b-8345-eae2740aac66\") " pod="openstack/neutron-64f66997d8-wj49l" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 
16:06:48.801255 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffxvx\" (UniqueName: \"kubernetes.io/projected/c2d3b257-75ab-4b85-b13b-081bf5b4825e-kube-api-access-ffxvx\") pod \"dnsmasq-dns-6b46f56485-gbws9\" (UID: \"c2d3b257-75ab-4b85-b13b-081bf5b4825e\") " pod="openstack/dnsmasq-dns-6b46f56485-gbws9" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.801355 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7blz8\" (UniqueName: \"kubernetes.io/projected/91fc26b9-373e-446b-8345-eae2740aac66-kube-api-access-7blz8\") pod \"neutron-64f66997d8-wj49l\" (UID: \"91fc26b9-373e-446b-8345-eae2740aac66\") " pod="openstack/neutron-64f66997d8-wj49l" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.804909 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/91fc26b9-373e-446b-8345-eae2740aac66-httpd-config\") pod \"neutron-64f66997d8-wj49l\" (UID: \"91fc26b9-373e-446b-8345-eae2740aac66\") " pod="openstack/neutron-64f66997d8-wj49l" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.829940 4760 generic.go:334] "Generic (PLEG): container finished" podID="20523ada-9ffa-4d1d-bf08-913672aa7df6" containerID="5ec36ce6aab699462c440d289d5c9b3d34f0e04e9578c26f7a3403c0f8a3069f" exitCode=0 Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.830011 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-f8htt" event={"ID":"20523ada-9ffa-4d1d-bf08-913672aa7df6","Type":"ContainerDied","Data":"5ec36ce6aab699462c440d289d5c9b3d34f0e04e9578c26f7a3403c0f8a3069f"} Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.897070 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b46f56485-gbws9" Jan 21 16:06:48 crc kubenswrapper[4760]: I0121 16:06:48.994293 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-64f66997d8-wj49l" Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.696430 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-f8htt" Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.779689 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-65fzw" Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.828774 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/20523ada-9ffa-4d1d-bf08-913672aa7df6-fernet-keys\") pod \"20523ada-9ffa-4d1d-bf08-913672aa7df6\" (UID: \"20523ada-9ffa-4d1d-bf08-913672aa7df6\") " Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.828861 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6qbd\" (UniqueName: \"kubernetes.io/projected/20523ada-9ffa-4d1d-bf08-913672aa7df6-kube-api-access-c6qbd\") pod \"20523ada-9ffa-4d1d-bf08-913672aa7df6\" (UID: \"20523ada-9ffa-4d1d-bf08-913672aa7df6\") " Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.828941 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20523ada-9ffa-4d1d-bf08-913672aa7df6-config-data\") pod \"20523ada-9ffa-4d1d-bf08-913672aa7df6\" (UID: \"20523ada-9ffa-4d1d-bf08-913672aa7df6\") " Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.829016 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20523ada-9ffa-4d1d-bf08-913672aa7df6-scripts\") pod \"20523ada-9ffa-4d1d-bf08-913672aa7df6\" (UID: \"20523ada-9ffa-4d1d-bf08-913672aa7df6\") " Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.829064 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20523ada-9ffa-4d1d-bf08-913672aa7df6-combined-ca-bundle\") pod \"20523ada-9ffa-4d1d-bf08-913672aa7df6\" (UID: \"20523ada-9ffa-4d1d-bf08-913672aa7df6\") " Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.829116 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/20523ada-9ffa-4d1d-bf08-913672aa7df6-credential-keys\") pod \"20523ada-9ffa-4d1d-bf08-913672aa7df6\" (UID: \"20523ada-9ffa-4d1d-bf08-913672aa7df6\") " Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.882536 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20523ada-9ffa-4d1d-bf08-913672aa7df6-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "20523ada-9ffa-4d1d-bf08-913672aa7df6" (UID: "20523ada-9ffa-4d1d-bf08-913672aa7df6"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.887293 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20523ada-9ffa-4d1d-bf08-913672aa7df6-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "20523ada-9ffa-4d1d-bf08-913672aa7df6" (UID: "20523ada-9ffa-4d1d-bf08-913672aa7df6"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.887446 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20523ada-9ffa-4d1d-bf08-913672aa7df6-scripts" (OuterVolumeSpecName: "scripts") pod "20523ada-9ffa-4d1d-bf08-913672aa7df6" (UID: "20523ada-9ffa-4d1d-bf08-913672aa7df6"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.890313 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20523ada-9ffa-4d1d-bf08-913672aa7df6-kube-api-access-c6qbd" (OuterVolumeSpecName: "kube-api-access-c6qbd") pod "20523ada-9ffa-4d1d-bf08-913672aa7df6" (UID: "20523ada-9ffa-4d1d-bf08-913672aa7df6"). InnerVolumeSpecName "kube-api-access-c6qbd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.898541 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20523ada-9ffa-4d1d-bf08-913672aa7df6-config-data" (OuterVolumeSpecName: "config-data") pod "20523ada-9ffa-4d1d-bf08-913672aa7df6" (UID: "20523ada-9ffa-4d1d-bf08-913672aa7df6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.900540 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20523ada-9ffa-4d1d-bf08-913672aa7df6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "20523ada-9ffa-4d1d-bf08-913672aa7df6" (UID: "20523ada-9ffa-4d1d-bf08-913672aa7df6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.932420 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/820ab298-8a58-4ac5-b7d2-ff030c6d2aff-scripts\") pod \"820ab298-8a58-4ac5-b7d2-ff030c6d2aff\" (UID: \"820ab298-8a58-4ac5-b7d2-ff030c6d2aff\") " Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.932488 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/820ab298-8a58-4ac5-b7d2-ff030c6d2aff-combined-ca-bundle\") pod \"820ab298-8a58-4ac5-b7d2-ff030c6d2aff\" (UID: \"820ab298-8a58-4ac5-b7d2-ff030c6d2aff\") " Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.932513 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dt4g\" (UniqueName: \"kubernetes.io/projected/820ab298-8a58-4ac5-b7d2-ff030c6d2aff-kube-api-access-8dt4g\") pod \"820ab298-8a58-4ac5-b7d2-ff030c6d2aff\" (UID: \"820ab298-8a58-4ac5-b7d2-ff030c6d2aff\") " Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.932548 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/820ab298-8a58-4ac5-b7d2-ff030c6d2aff-config-data\") pod \"820ab298-8a58-4ac5-b7d2-ff030c6d2aff\" (UID: \"820ab298-8a58-4ac5-b7d2-ff030c6d2aff\") " Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.932727 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/820ab298-8a58-4ac5-b7d2-ff030c6d2aff-logs\") pod \"820ab298-8a58-4ac5-b7d2-ff030c6d2aff\" (UID: \"820ab298-8a58-4ac5-b7d2-ff030c6d2aff\") " Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.933104 4760 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/20523ada-9ffa-4d1d-bf08-913672aa7df6-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.933122 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6qbd\" (UniqueName: \"kubernetes.io/projected/20523ada-9ffa-4d1d-bf08-913672aa7df6-kube-api-access-c6qbd\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.933132 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20523ada-9ffa-4d1d-bf08-913672aa7df6-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.933141 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20523ada-9ffa-4d1d-bf08-913672aa7df6-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.933149 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20523ada-9ffa-4d1d-bf08-913672aa7df6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.933157 4760 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/20523ada-9ffa-4d1d-bf08-913672aa7df6-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.933505 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/820ab298-8a58-4ac5-b7d2-ff030c6d2aff-logs" (OuterVolumeSpecName: "logs") pod "820ab298-8a58-4ac5-b7d2-ff030c6d2aff" (UID: "820ab298-8a58-4ac5-b7d2-ff030c6d2aff"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.936749 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/820ab298-8a58-4ac5-b7d2-ff030c6d2aff-kube-api-access-8dt4g" (OuterVolumeSpecName: "kube-api-access-8dt4g") pod "820ab298-8a58-4ac5-b7d2-ff030c6d2aff" (UID: "820ab298-8a58-4ac5-b7d2-ff030c6d2aff"). InnerVolumeSpecName "kube-api-access-8dt4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.937984 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6c6778d77f-gkzrk"] Jan 21 16:06:50 crc kubenswrapper[4760]: E0121 16:06:50.938435 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20523ada-9ffa-4d1d-bf08-913672aa7df6" containerName="keystone-bootstrap" Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.938455 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="20523ada-9ffa-4d1d-bf08-913672aa7df6" containerName="keystone-bootstrap" Jan 21 16:06:50 crc kubenswrapper[4760]: E0121 16:06:50.938467 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="820ab298-8a58-4ac5-b7d2-ff030c6d2aff" containerName="placement-db-sync" Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.938474 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="820ab298-8a58-4ac5-b7d2-ff030c6d2aff" containerName="placement-db-sync" Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.938626 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="20523ada-9ffa-4d1d-bf08-913672aa7df6" containerName="keystone-bootstrap" Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.938652 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="820ab298-8a58-4ac5-b7d2-ff030c6d2aff" containerName="placement-db-sync" Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.939923 4760 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/neutron-6c6778d77f-gkzrk" Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.941313 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/820ab298-8a58-4ac5-b7d2-ff030c6d2aff-scripts" (OuterVolumeSpecName: "scripts") pod "820ab298-8a58-4ac5-b7d2-ff030c6d2aff" (UID: "820ab298-8a58-4ac5-b7d2-ff030c6d2aff"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.946404 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.946670 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.991820 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-65fzw" event={"ID":"820ab298-8a58-4ac5-b7d2-ff030c6d2aff","Type":"ContainerDied","Data":"32c3ace8959ac2cefe8d6242f2e1a8ea1eca8c9122b7a433793bb83fc6d70f8f"} Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.991862 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32c3ace8959ac2cefe8d6242f2e1a8ea1eca8c9122b7a433793bb83fc6d70f8f" Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.991938 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-65fzw" Jan 21 16:06:50 crc kubenswrapper[4760]: I0121 16:06:50.994675 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6c6778d77f-gkzrk"] Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.013351 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5b497869f9-hs8kf"] Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.015656 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-5b497869f9-hs8kf" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.022392 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.022707 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.035630 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/42e45354-7553-43f2-af5a-613dd1a6dde9-public-tls-certs\") pod \"neutron-6c6778d77f-gkzrk\" (UID: \"42e45354-7553-43f2-af5a-613dd1a6dde9\") " pod="openstack/neutron-6c6778d77f-gkzrk" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.035673 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bfjv\" (UniqueName: \"kubernetes.io/projected/42e45354-7553-43f2-af5a-613dd1a6dde9-kube-api-access-5bfjv\") pod \"neutron-6c6778d77f-gkzrk\" (UID: \"42e45354-7553-43f2-af5a-613dd1a6dde9\") " pod="openstack/neutron-6c6778d77f-gkzrk" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.035727 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/42e45354-7553-43f2-af5a-613dd1a6dde9-httpd-config\") pod \"neutron-6c6778d77f-gkzrk\" (UID: \"42e45354-7553-43f2-af5a-613dd1a6dde9\") " pod="openstack/neutron-6c6778d77f-gkzrk" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.035753 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/42e45354-7553-43f2-af5a-613dd1a6dde9-config\") pod \"neutron-6c6778d77f-gkzrk\" (UID: \"42e45354-7553-43f2-af5a-613dd1a6dde9\") " pod="openstack/neutron-6c6778d77f-gkzrk" Jan 
21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.035783 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/42e45354-7553-43f2-af5a-613dd1a6dde9-internal-tls-certs\") pod \"neutron-6c6778d77f-gkzrk\" (UID: \"42e45354-7553-43f2-af5a-613dd1a6dde9\") " pod="openstack/neutron-6c6778d77f-gkzrk" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.035855 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/42e45354-7553-43f2-af5a-613dd1a6dde9-ovndb-tls-certs\") pod \"neutron-6c6778d77f-gkzrk\" (UID: \"42e45354-7553-43f2-af5a-613dd1a6dde9\") " pod="openstack/neutron-6c6778d77f-gkzrk" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.035882 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42e45354-7553-43f2-af5a-613dd1a6dde9-combined-ca-bundle\") pod \"neutron-6c6778d77f-gkzrk\" (UID: \"42e45354-7553-43f2-af5a-613dd1a6dde9\") " pod="openstack/neutron-6c6778d77f-gkzrk" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.035933 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/820ab298-8a58-4ac5-b7d2-ff030c6d2aff-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.035944 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dt4g\" (UniqueName: \"kubernetes.io/projected/820ab298-8a58-4ac5-b7d2-ff030c6d2aff-kube-api-access-8dt4g\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.035953 4760 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/820ab298-8a58-4ac5-b7d2-ff030c6d2aff-logs\") on node \"crc\" DevicePath \"\"" Jan 21 
16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.036105 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/820ab298-8a58-4ac5-b7d2-ff030c6d2aff-config-data" (OuterVolumeSpecName: "config-data") pod "820ab298-8a58-4ac5-b7d2-ff030c6d2aff" (UID: "820ab298-8a58-4ac5-b7d2-ff030c6d2aff"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.040154 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/820ab298-8a58-4ac5-b7d2-ff030c6d2aff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "820ab298-8a58-4ac5-b7d2-ff030c6d2aff" (UID: "820ab298-8a58-4ac5-b7d2-ff030c6d2aff"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.048172 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5b497869f9-hs8kf"]
Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.075357 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-f8htt" event={"ID":"20523ada-9ffa-4d1d-bf08-913672aa7df6","Type":"ContainerDied","Data":"b6b482d4a0ed30a32f26d81fe7bb9825ad6753daa61a0bf7f635904abc045030"}
Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.075407 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6b482d4a0ed30a32f26d81fe7bb9825ad6753daa61a0bf7f635904abc045030"
Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.075495 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-f8htt"
Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.130801 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b46f56485-gbws9"]
Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.137234 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/42e45354-7553-43f2-af5a-613dd1a6dde9-internal-tls-certs\") pod \"neutron-6c6778d77f-gkzrk\" (UID: \"42e45354-7553-43f2-af5a-613dd1a6dde9\") " pod="openstack/neutron-6c6778d77f-gkzrk"
Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.137317 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42613e5a-e22d-4358-8cd2-1ebfd1a42b55-combined-ca-bundle\") pod \"keystone-5b497869f9-hs8kf\" (UID: \"42613e5a-e22d-4358-8cd2-1ebfd1a42b55\") " pod="openstack/keystone-5b497869f9-hs8kf"
Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.137367 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/42613e5a-e22d-4358-8cd2-1ebfd1a42b55-credential-keys\") pod \"keystone-5b497869f9-hs8kf\" (UID: \"42613e5a-e22d-4358-8cd2-1ebfd1a42b55\") " pod="openstack/keystone-5b497869f9-hs8kf"
Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.137414 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/42613e5a-e22d-4358-8cd2-1ebfd1a42b55-fernet-keys\") pod \"keystone-5b497869f9-hs8kf\" (UID: \"42613e5a-e22d-4358-8cd2-1ebfd1a42b55\") " pod="openstack/keystone-5b497869f9-hs8kf"
Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.137449 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/42e45354-7553-43f2-af5a-613dd1a6dde9-ovndb-tls-certs\") pod \"neutron-6c6778d77f-gkzrk\" (UID: \"42e45354-7553-43f2-af5a-613dd1a6dde9\") " pod="openstack/neutron-6c6778d77f-gkzrk"
Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.137505 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42613e5a-e22d-4358-8cd2-1ebfd1a42b55-config-data\") pod \"keystone-5b497869f9-hs8kf\" (UID: \"42613e5a-e22d-4358-8cd2-1ebfd1a42b55\") " pod="openstack/keystone-5b497869f9-hs8kf"
Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.137544 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42e45354-7553-43f2-af5a-613dd1a6dde9-combined-ca-bundle\") pod \"neutron-6c6778d77f-gkzrk\" (UID: \"42e45354-7553-43f2-af5a-613dd1a6dde9\") " pod="openstack/neutron-6c6778d77f-gkzrk"
Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.137570 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/42613e5a-e22d-4358-8cd2-1ebfd1a42b55-internal-tls-certs\") pod \"keystone-5b497869f9-hs8kf\" (UID: \"42613e5a-e22d-4358-8cd2-1ebfd1a42b55\") " pod="openstack/keystone-5b497869f9-hs8kf"
Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.137594 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42613e5a-e22d-4358-8cd2-1ebfd1a42b55-scripts\") pod \"keystone-5b497869f9-hs8kf\" (UID: \"42613e5a-e22d-4358-8cd2-1ebfd1a42b55\") " pod="openstack/keystone-5b497869f9-hs8kf"
Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.137626 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/42e45354-7553-43f2-af5a-613dd1a6dde9-public-tls-certs\") pod \"neutron-6c6778d77f-gkzrk\" (UID: \"42e45354-7553-43f2-af5a-613dd1a6dde9\") " pod="openstack/neutron-6c6778d77f-gkzrk"
Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.137649 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/42613e5a-e22d-4358-8cd2-1ebfd1a42b55-public-tls-certs\") pod \"keystone-5b497869f9-hs8kf\" (UID: \"42613e5a-e22d-4358-8cd2-1ebfd1a42b55\") " pod="openstack/keystone-5b497869f9-hs8kf"
Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.137687 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bfjv\" (UniqueName: \"kubernetes.io/projected/42e45354-7553-43f2-af5a-613dd1a6dde9-kube-api-access-5bfjv\") pod \"neutron-6c6778d77f-gkzrk\" (UID: \"42e45354-7553-43f2-af5a-613dd1a6dde9\") " pod="openstack/neutron-6c6778d77f-gkzrk"
Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.137736 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g7m5\" (UniqueName: \"kubernetes.io/projected/42613e5a-e22d-4358-8cd2-1ebfd1a42b55-kube-api-access-9g7m5\") pod \"keystone-5b497869f9-hs8kf\" (UID: \"42613e5a-e22d-4358-8cd2-1ebfd1a42b55\") " pod="openstack/keystone-5b497869f9-hs8kf"
Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.137776 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/42e45354-7553-43f2-af5a-613dd1a6dde9-httpd-config\") pod \"neutron-6c6778d77f-gkzrk\" (UID: \"42e45354-7553-43f2-af5a-613dd1a6dde9\") " pod="openstack/neutron-6c6778d77f-gkzrk"
Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.137807 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/42e45354-7553-43f2-af5a-613dd1a6dde9-config\") pod \"neutron-6c6778d77f-gkzrk\" (UID: \"42e45354-7553-43f2-af5a-613dd1a6dde9\") " pod="openstack/neutron-6c6778d77f-gkzrk"
Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.137900 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/820ab298-8a58-4ac5-b7d2-ff030c6d2aff-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.137914 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/820ab298-8a58-4ac5-b7d2-ff030c6d2aff-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.151142 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42e45354-7553-43f2-af5a-613dd1a6dde9-combined-ca-bundle\") pod \"neutron-6c6778d77f-gkzrk\" (UID: \"42e45354-7553-43f2-af5a-613dd1a6dde9\") " pod="openstack/neutron-6c6778d77f-gkzrk"
Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.152300 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/42e45354-7553-43f2-af5a-613dd1a6dde9-ovndb-tls-certs\") pod \"neutron-6c6778d77f-gkzrk\" (UID: \"42e45354-7553-43f2-af5a-613dd1a6dde9\") " pod="openstack/neutron-6c6778d77f-gkzrk"
Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.156911 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/42e45354-7553-43f2-af5a-613dd1a6dde9-internal-tls-certs\") pod \"neutron-6c6778d77f-gkzrk\" (UID: \"42e45354-7553-43f2-af5a-613dd1a6dde9\") " pod="openstack/neutron-6c6778d77f-gkzrk"
Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.166155 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/42e45354-7553-43f2-af5a-613dd1a6dde9-public-tls-certs\") pod \"neutron-6c6778d77f-gkzrk\" (UID: \"42e45354-7553-43f2-af5a-613dd1a6dde9\") " pod="openstack/neutron-6c6778d77f-gkzrk"
Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.167432 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/42e45354-7553-43f2-af5a-613dd1a6dde9-config\") pod \"neutron-6c6778d77f-gkzrk\" (UID: \"42e45354-7553-43f2-af5a-613dd1a6dde9\") " pod="openstack/neutron-6c6778d77f-gkzrk"
Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.183576 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/42e45354-7553-43f2-af5a-613dd1a6dde9-httpd-config\") pod \"neutron-6c6778d77f-gkzrk\" (UID: \"42e45354-7553-43f2-af5a-613dd1a6dde9\") " pod="openstack/neutron-6c6778d77f-gkzrk"
Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.189408 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bfjv\" (UniqueName: \"kubernetes.io/projected/42e45354-7553-43f2-af5a-613dd1a6dde9-kube-api-access-5bfjv\") pod \"neutron-6c6778d77f-gkzrk\" (UID: \"42e45354-7553-43f2-af5a-613dd1a6dde9\") " pod="openstack/neutron-6c6778d77f-gkzrk"
Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.239966 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42613e5a-e22d-4358-8cd2-1ebfd1a42b55-combined-ca-bundle\") pod \"keystone-5b497869f9-hs8kf\" (UID: \"42613e5a-e22d-4358-8cd2-1ebfd1a42b55\") " pod="openstack/keystone-5b497869f9-hs8kf"
Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.239997 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/42613e5a-e22d-4358-8cd2-1ebfd1a42b55-credential-keys\") pod \"keystone-5b497869f9-hs8kf\" (UID: \"42613e5a-e22d-4358-8cd2-1ebfd1a42b55\") " pod="openstack/keystone-5b497869f9-hs8kf"
Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.240038 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/42613e5a-e22d-4358-8cd2-1ebfd1a42b55-fernet-keys\") pod \"keystone-5b497869f9-hs8kf\" (UID: \"42613e5a-e22d-4358-8cd2-1ebfd1a42b55\") " pod="openstack/keystone-5b497869f9-hs8kf"
Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.240067 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42613e5a-e22d-4358-8cd2-1ebfd1a42b55-config-data\") pod \"keystone-5b497869f9-hs8kf\" (UID: \"42613e5a-e22d-4358-8cd2-1ebfd1a42b55\") " pod="openstack/keystone-5b497869f9-hs8kf"
Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.240105 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/42613e5a-e22d-4358-8cd2-1ebfd1a42b55-internal-tls-certs\") pod \"keystone-5b497869f9-hs8kf\" (UID: \"42613e5a-e22d-4358-8cd2-1ebfd1a42b55\") " pod="openstack/keystone-5b497869f9-hs8kf"
Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.240138 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42613e5a-e22d-4358-8cd2-1ebfd1a42b55-scripts\") pod \"keystone-5b497869f9-hs8kf\" (UID: \"42613e5a-e22d-4358-8cd2-1ebfd1a42b55\") " pod="openstack/keystone-5b497869f9-hs8kf"
Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.240967 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/42613e5a-e22d-4358-8cd2-1ebfd1a42b55-public-tls-certs\") pod \"keystone-5b497869f9-hs8kf\" (UID: \"42613e5a-e22d-4358-8cd2-1ebfd1a42b55\") " pod="openstack/keystone-5b497869f9-hs8kf"
Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.241231 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9g7m5\" (UniqueName: \"kubernetes.io/projected/42613e5a-e22d-4358-8cd2-1ebfd1a42b55-kube-api-access-9g7m5\") pod \"keystone-5b497869f9-hs8kf\" (UID: \"42613e5a-e22d-4358-8cd2-1ebfd1a42b55\") " pod="openstack/keystone-5b497869f9-hs8kf"
Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.251685 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42613e5a-e22d-4358-8cd2-1ebfd1a42b55-scripts\") pod \"keystone-5b497869f9-hs8kf\" (UID: \"42613e5a-e22d-4358-8cd2-1ebfd1a42b55\") " pod="openstack/keystone-5b497869f9-hs8kf"
Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.261479 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/42613e5a-e22d-4358-8cd2-1ebfd1a42b55-public-tls-certs\") pod \"keystone-5b497869f9-hs8kf\" (UID: \"42613e5a-e22d-4358-8cd2-1ebfd1a42b55\") " pod="openstack/keystone-5b497869f9-hs8kf"
Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.261490 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/42613e5a-e22d-4358-8cd2-1ebfd1a42b55-credential-keys\") pod \"keystone-5b497869f9-hs8kf\" (UID: \"42613e5a-e22d-4358-8cd2-1ebfd1a42b55\") " pod="openstack/keystone-5b497869f9-hs8kf"
Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.262659 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42613e5a-e22d-4358-8cd2-1ebfd1a42b55-config-data\") pod \"keystone-5b497869f9-hs8kf\" (UID: \"42613e5a-e22d-4358-8cd2-1ebfd1a42b55\") " pod="openstack/keystone-5b497869f9-hs8kf"
Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.263002 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/42613e5a-e22d-4358-8cd2-1ebfd1a42b55-fernet-keys\") pod \"keystone-5b497869f9-hs8kf\" (UID: \"42613e5a-e22d-4358-8cd2-1ebfd1a42b55\") " pod="openstack/keystone-5b497869f9-hs8kf"
Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.263102 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/42613e5a-e22d-4358-8cd2-1ebfd1a42b55-internal-tls-certs\") pod \"keystone-5b497869f9-hs8kf\" (UID: \"42613e5a-e22d-4358-8cd2-1ebfd1a42b55\") " pod="openstack/keystone-5b497869f9-hs8kf"
Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.267194 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9g7m5\" (UniqueName: \"kubernetes.io/projected/42613e5a-e22d-4358-8cd2-1ebfd1a42b55-kube-api-access-9g7m5\") pod \"keystone-5b497869f9-hs8kf\" (UID: \"42613e5a-e22d-4358-8cd2-1ebfd1a42b55\") " pod="openstack/keystone-5b497869f9-hs8kf"
Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.267403 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42613e5a-e22d-4358-8cd2-1ebfd1a42b55-combined-ca-bundle\") pod \"keystone-5b497869f9-hs8kf\" (UID: \"42613e5a-e22d-4358-8cd2-1ebfd1a42b55\") " pod="openstack/keystone-5b497869f9-hs8kf"
Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.277694 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6c6778d77f-gkzrk"
Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.373772 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5b497869f9-hs8kf"
Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.381022 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-64f66997d8-wj49l"]
Jan 21 16:06:51 crc kubenswrapper[4760]: W0121 16:06:51.446511 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91fc26b9_373e_446b_8345_eae2740aac66.slice/crio-4fc62016048339323c931fffa3603e72f7a62c9ebd94588ca399a9bb3da45b78 WatchSource:0}: Error finding container 4fc62016048339323c931fffa3603e72f7a62c9ebd94588ca399a9bb3da45b78: Status 404 returned error can't find the container with id 4fc62016048339323c931fffa3603e72f7a62c9ebd94588ca399a9bb3da45b78
Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.934131 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-65c954fbbd-tb9kj"]
Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.946617 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-65c954fbbd-tb9kj"
Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.950294 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.950704 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-jlmf6"
Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.951855 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.952452 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc"
Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.952725 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc"
Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.961877 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-65c954fbbd-tb9kj"]
Jan 21 16:06:51 crc kubenswrapper[4760]: I0121 16:06:51.980726 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5b497869f9-hs8kf"]
Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.065143 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3582d40-46db-4b7b-a7ca-12950184f371-public-tls-certs\") pod \"placement-65c954fbbd-tb9kj\" (UID: \"b3582d40-46db-4b7b-a7ca-12950184f371\") " pod="openstack/placement-65c954fbbd-tb9kj"
Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.065964 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3582d40-46db-4b7b-a7ca-12950184f371-scripts\") pod \"placement-65c954fbbd-tb9kj\" (UID: \"b3582d40-46db-4b7b-a7ca-12950184f371\") " pod="openstack/placement-65c954fbbd-tb9kj"
Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.066111 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgdkl\" (UniqueName: \"kubernetes.io/projected/b3582d40-46db-4b7b-a7ca-12950184f371-kube-api-access-bgdkl\") pod \"placement-65c954fbbd-tb9kj\" (UID: \"b3582d40-46db-4b7b-a7ca-12950184f371\") " pod="openstack/placement-65c954fbbd-tb9kj"
Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.066213 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3582d40-46db-4b7b-a7ca-12950184f371-logs\") pod \"placement-65c954fbbd-tb9kj\" (UID: \"b3582d40-46db-4b7b-a7ca-12950184f371\") " pod="openstack/placement-65c954fbbd-tb9kj"
Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.066316 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3582d40-46db-4b7b-a7ca-12950184f371-config-data\") pod \"placement-65c954fbbd-tb9kj\" (UID: \"b3582d40-46db-4b7b-a7ca-12950184f371\") " pod="openstack/placement-65c954fbbd-tb9kj"
Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.066443 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3582d40-46db-4b7b-a7ca-12950184f371-internal-tls-certs\") pod \"placement-65c954fbbd-tb9kj\" (UID: \"b3582d40-46db-4b7b-a7ca-12950184f371\") " pod="openstack/placement-65c954fbbd-tb9kj"
Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.066525 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3582d40-46db-4b7b-a7ca-12950184f371-combined-ca-bundle\") pod \"placement-65c954fbbd-tb9kj\" (UID: \"b3582d40-46db-4b7b-a7ca-12950184f371\") " pod="openstack/placement-65c954fbbd-tb9kj"
Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.134529 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5b497869f9-hs8kf" event={"ID":"42613e5a-e22d-4358-8cd2-1ebfd1a42b55","Type":"ContainerStarted","Data":"78422f64755a85781bf07ee1aaff446d3b3742c10a88b081b9061f0433ea7d87"}
Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.143029 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6c6778d77f-gkzrk"]
Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.149694 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64f66997d8-wj49l" event={"ID":"91fc26b9-373e-446b-8345-eae2740aac66","Type":"ContainerStarted","Data":"a986ebbb2f6e292f71ba54ba9ca8d611dbceabf2b8f7e56bf383fb4b2edc0c57"}
Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.150229 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64f66997d8-wj49l" event={"ID":"91fc26b9-373e-446b-8345-eae2740aac66","Type":"ContainerStarted","Data":"4fc62016048339323c931fffa3603e72f7a62c9ebd94588ca399a9bb3da45b78"}
Jan 21 16:06:52 crc kubenswrapper[4760]: W0121 16:06:52.155559 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42e45354_7553_43f2_af5a_613dd1a6dde9.slice/crio-95acb7797c4b6f608f212c0cb01f03588e629cbd10ab2e170bc58568ffaea5d6 WatchSource:0}: Error finding container 95acb7797c4b6f608f212c0cb01f03588e629cbd10ab2e170bc58568ffaea5d6: Status 404 returned error can't find the container with id 95acb7797c4b6f608f212c0cb01f03588e629cbd10ab2e170bc58568ffaea5d6
Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.156027 4760 generic.go:334] "Generic (PLEG): container finished" podID="c2d3b257-75ab-4b85-b13b-081bf5b4825e" containerID="81257208905e23b7deeb6f95a7d5792368fab283dcded98d3eb8c4c312d84444" exitCode=0
Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.156230 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b46f56485-gbws9" event={"ID":"c2d3b257-75ab-4b85-b13b-081bf5b4825e","Type":"ContainerDied","Data":"81257208905e23b7deeb6f95a7d5792368fab283dcded98d3eb8c4c312d84444"}
Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.156270 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b46f56485-gbws9" event={"ID":"c2d3b257-75ab-4b85-b13b-081bf5b4825e","Type":"ContainerStarted","Data":"eaafe848834c73555642821066dc7bea67c04d548195e7becd244c407f552006"}
Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.169017 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3582d40-46db-4b7b-a7ca-12950184f371-scripts\") pod \"placement-65c954fbbd-tb9kj\" (UID: \"b3582d40-46db-4b7b-a7ca-12950184f371\") " pod="openstack/placement-65c954fbbd-tb9kj"
Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.169144 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgdkl\" (UniqueName: \"kubernetes.io/projected/b3582d40-46db-4b7b-a7ca-12950184f371-kube-api-access-bgdkl\") pod \"placement-65c954fbbd-tb9kj\" (UID: \"b3582d40-46db-4b7b-a7ca-12950184f371\") " pod="openstack/placement-65c954fbbd-tb9kj"
Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.169175 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3582d40-46db-4b7b-a7ca-12950184f371-logs\") pod \"placement-65c954fbbd-tb9kj\" (UID: \"b3582d40-46db-4b7b-a7ca-12950184f371\") " pod="openstack/placement-65c954fbbd-tb9kj"
Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.169224 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3582d40-46db-4b7b-a7ca-12950184f371-config-data\") pod \"placement-65c954fbbd-tb9kj\" (UID: \"b3582d40-46db-4b7b-a7ca-12950184f371\") " pod="openstack/placement-65c954fbbd-tb9kj"
Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.169275 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3582d40-46db-4b7b-a7ca-12950184f371-internal-tls-certs\") pod \"placement-65c954fbbd-tb9kj\" (UID: \"b3582d40-46db-4b7b-a7ca-12950184f371\") " pod="openstack/placement-65c954fbbd-tb9kj"
Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.169339 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3582d40-46db-4b7b-a7ca-12950184f371-combined-ca-bundle\") pod \"placement-65c954fbbd-tb9kj\" (UID: \"b3582d40-46db-4b7b-a7ca-12950184f371\") " pod="openstack/placement-65c954fbbd-tb9kj"
Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.169377 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3582d40-46db-4b7b-a7ca-12950184f371-public-tls-certs\") pod \"placement-65c954fbbd-tb9kj\" (UID: \"b3582d40-46db-4b7b-a7ca-12950184f371\") " pod="openstack/placement-65c954fbbd-tb9kj"
Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.170063 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3582d40-46db-4b7b-a7ca-12950184f371-logs\") pod \"placement-65c954fbbd-tb9kj\" (UID: \"b3582d40-46db-4b7b-a7ca-12950184f371\") " pod="openstack/placement-65c954fbbd-tb9kj"
Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.186983 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3582d40-46db-4b7b-a7ca-12950184f371-scripts\") pod \"placement-65c954fbbd-tb9kj\" (UID: \"b3582d40-46db-4b7b-a7ca-12950184f371\") " pod="openstack/placement-65c954fbbd-tb9kj"
Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.187711 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3582d40-46db-4b7b-a7ca-12950184f371-public-tls-certs\") pod \"placement-65c954fbbd-tb9kj\" (UID: \"b3582d40-46db-4b7b-a7ca-12950184f371\") " pod="openstack/placement-65c954fbbd-tb9kj"
Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.189143 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3582d40-46db-4b7b-a7ca-12950184f371-internal-tls-certs\") pod \"placement-65c954fbbd-tb9kj\" (UID: \"b3582d40-46db-4b7b-a7ca-12950184f371\") " pod="openstack/placement-65c954fbbd-tb9kj"
Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.195839 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3582d40-46db-4b7b-a7ca-12950184f371-config-data\") pod \"placement-65c954fbbd-tb9kj\" (UID: \"b3582d40-46db-4b7b-a7ca-12950184f371\") " pod="openstack/placement-65c954fbbd-tb9kj"
Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.196818 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3582d40-46db-4b7b-a7ca-12950184f371-combined-ca-bundle\") pod \"placement-65c954fbbd-tb9kj\" (UID: \"b3582d40-46db-4b7b-a7ca-12950184f371\") " pod="openstack/placement-65c954fbbd-tb9kj"
Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.225784 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgdkl\" (UniqueName: \"kubernetes.io/projected/b3582d40-46db-4b7b-a7ca-12950184f371-kube-api-access-bgdkl\") pod \"placement-65c954fbbd-tb9kj\" (UID: \"b3582d40-46db-4b7b-a7ca-12950184f371\") " pod="openstack/placement-65c954fbbd-tb9kj"
Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.275916 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d1ccc2ed-d1e8-4b84-807d-55d70e8def12","Type":"ContainerStarted","Data":"0bb520c7a5b8a3157dd40b8ec8b031774f02de4d2dd903b6e8240361f7e50ac2"}
Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.276163 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d1ccc2ed-d1e8-4b84-807d-55d70e8def12","Type":"ContainerStarted","Data":"b8e6da9f3928bf31c322679ebf3df1dd98e20de9c8f1f9b902c41a974ced7259"}
Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.276248 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d1ccc2ed-d1e8-4b84-807d-55d70e8def12","Type":"ContainerStarted","Data":"b5ebc7b6189a31e108002e918e4e31b5da05d31404579da0fae13ab21bc8576d"}
Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.279242 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-pgvwf" event={"ID":"e272905b-28ec-4f49-8c51-f5c5d97c4a9d","Type":"ContainerStarted","Data":"898b834ef1be751c68f08b1b203b5655c64ba2844d18217a131ec12119259d69"}
Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.306812 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-65c954fbbd-tb9kj"
Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.330298 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-pgvwf" podStartSLOduration=4.428343912 podStartE2EDuration="59.330269181s" podCreationTimestamp="2026-01-21 16:05:53 +0000 UTC" firstStartedPulling="2026-01-21 16:05:56.410094437 +0000 UTC m=+1127.077864015" lastFinishedPulling="2026-01-21 16:06:51.312019706 +0000 UTC m=+1181.979789284" observedRunningTime="2026-01-21 16:06:52.322049334 +0000 UTC m=+1182.989818912" watchObservedRunningTime="2026-01-21 16:06:52.330269181 +0000 UTC m=+1182.998038759"
Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.351832 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8b68aa1-7489-4689-ad6b-8aa7149b9a67","Type":"ContainerStarted","Data":"32ffc3ef3b4941d1f85287b3917cf08bba8a482a2d1cac30e443cf2335d64089"}
Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.870268 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-65c954fbbd-tb9kj"]
Jan 21 16:06:52 crc kubenswrapper[4760]: W0121 16:06:52.893867 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb3582d40_46db_4b7b_a7ca_12950184f371.slice/crio-8cae2e3c076b600a45045589568f21acf1b67782cbc8a155c3297c0219e231a4 WatchSource:0}: Error finding container 8cae2e3c076b600a45045589568f21acf1b67782cbc8a155c3297c0219e231a4: Status 404 returned error can't find the container with id 8cae2e3c076b600a45045589568f21acf1b67782cbc8a155c3297c0219e231a4
Jan 21 16:06:52 crc kubenswrapper[4760]: I0121 16:06:52.962469 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-789c75ff48-s7f9p" podUID="9ce8d17c-d046-45b5-9136-6faca838de63" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused"
Jan 21 16:06:53 crc kubenswrapper[4760]: I0121 16:06:53.105789 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5c9896dc76-gwrzv" podUID="0e7e96ce-a64f-4a21-97e1-b2ebabc7e236" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused"
Jan 21 16:06:53 crc kubenswrapper[4760]: I0121 16:06:53.378140 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5b497869f9-hs8kf" event={"ID":"42613e5a-e22d-4358-8cd2-1ebfd1a42b55","Type":"ContainerStarted","Data":"73193f81a0de0ed95c24da22b4f2e5deb121f24cfa492e0a9a01860d8772b3df"}
Jan 21 16:06:53 crc kubenswrapper[4760]: I0121 16:06:53.378937 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-5b497869f9-hs8kf"
Jan 21 16:06:53 crc kubenswrapper[4760]: I0121 16:06:53.381040 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c6778d77f-gkzrk" event={"ID":"42e45354-7553-43f2-af5a-613dd1a6dde9","Type":"ContainerStarted","Data":"9c50959c0f8b06a902639ee774df9e288497693e385c2c1bf435a069bbfbc1d6"}
Jan 21 16:06:53 crc kubenswrapper[4760]: I0121 16:06:53.381077 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c6778d77f-gkzrk" event={"ID":"42e45354-7553-43f2-af5a-613dd1a6dde9","Type":"ContainerStarted","Data":"95acb7797c4b6f608f212c0cb01f03588e629cbd10ab2e170bc58568ffaea5d6"}
Jan 21 16:06:53 crc kubenswrapper[4760]: I0121 16:06:53.388891 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-65c954fbbd-tb9kj" event={"ID":"b3582d40-46db-4b7b-a7ca-12950184f371","Type":"ContainerStarted","Data":"8cae2e3c076b600a45045589568f21acf1b67782cbc8a155c3297c0219e231a4"}
Jan 21 16:06:53 crc kubenswrapper[4760]: I0121 16:06:53.401164 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64f66997d8-wj49l" event={"ID":"91fc26b9-373e-446b-8345-eae2740aac66","Type":"ContainerStarted","Data":"32cbd81036a54e4a85f4ae140c458d0e2c2f50eba42c122a4f19ffe03fee4df9"}
Jan 21 16:06:53 crc kubenswrapper[4760]: I0121 16:06:53.402560 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-64f66997d8-wj49l"
Jan 21 16:06:53 crc kubenswrapper[4760]: I0121 16:06:53.415661 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5b497869f9-hs8kf" podStartSLOduration=3.415631092 podStartE2EDuration="3.415631092s" podCreationTimestamp="2026-01-21 16:06:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:06:53.39841594 +0000 UTC m=+1184.066185518" watchObservedRunningTime="2026-01-21 16:06:53.415631092 +0000 UTC m=+1184.083400670"
Jan 21 16:06:53 crc kubenswrapper[4760]: I0121 16:06:53.417599 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b46f56485-gbws9" event={"ID":"c2d3b257-75ab-4b85-b13b-081bf5b4825e","Type":"ContainerStarted","Data":"bfd7cccf91406ce385eba94b4e7d2890ec30e85d9369b87b19b4d41e360b2054"}
Jan 21 16:06:53 crc kubenswrapper[4760]: I0121 16:06:53.420778 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b46f56485-gbws9"
Jan 21 16:06:53 crc kubenswrapper[4760]: I0121 16:06:53.434152 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-64f66997d8-wj49l" podStartSLOduration=5.434128264 podStartE2EDuration="5.434128264s" podCreationTimestamp="2026-01-21 16:06:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:06:53.430240261 +0000 UTC m=+1184.098009839" watchObservedRunningTime="2026-01-21 16:06:53.434128264 +0000 UTC m=+1184.101897842"
Jan 21 16:06:53 crc kubenswrapper[4760]: I0121 16:06:53.450929 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d1ccc2ed-d1e8-4b84-807d-55d70e8def12","Type":"ContainerStarted","Data":"f017047951b6689f4832549f39b119810a765060bdc253c62f437b8b528b8909"}
Jan 21 16:06:53 crc kubenswrapper[4760]: I0121 16:06:53.451246 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d1ccc2ed-d1e8-4b84-807d-55d70e8def12","Type":"ContainerStarted","Data":"41b01336b076e472cd1804983c90267de14f09495934da2a6ea36f06788bb676"}
Jan 21 16:06:53 crc kubenswrapper[4760]: I0121 16:06:53.467535 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b46f56485-gbws9" podStartSLOduration=5.467504432 podStartE2EDuration="5.467504432s" podCreationTimestamp="2026-01-21 16:06:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:06:53.45487435 +0000 UTC m=+1184.122643938" watchObservedRunningTime="2026-01-21 16:06:53.467504432 +0000 UTC m=+1184.135274010"
Jan 21 16:06:53 crc kubenswrapper[4760]: I0121 16:06:53.512915 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=60.253218693 podStartE2EDuration="1m50.512887348s" podCreationTimestamp="2026-01-21 16:05:03 +0000 UTC" firstStartedPulling="2026-01-21 16:05:51.779259489 +0000 UTC m=+1122.447029067" lastFinishedPulling="2026-01-21 16:06:42.038928144 +0000 UTC m=+1172.706697722" observedRunningTime="2026-01-21 16:06:53.49666435 +0000 UTC m=+1184.164433928" watchObservedRunningTime="2026-01-21 16:06:53.512887348 +0000 UTC m=+1184.180656926"
Jan 21 16:06:53 crc kubenswrapper[4760]: I0121 16:06:53.849346 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b46f56485-gbws9"]
Jan 21 16:06:53 crc kubenswrapper[4760]: I0121 16:06:53.873388 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79cd4f6685-krlfc"]
Jan 21 16:06:53 crc kubenswrapper[4760]: I0121 16:06:53.875132 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79cd4f6685-krlfc"
Jan 21 16:06:53 crc kubenswrapper[4760]: I0121 16:06:53.884702 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0"
Jan 21 16:06:53 crc kubenswrapper[4760]: I0121 16:06:53.892242 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79cd4f6685-krlfc"]
Jan 21 16:06:53 crc kubenswrapper[4760]: I0121 16:06:53.964516 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnlkn\" (UniqueName: \"kubernetes.io/projected/13f413eb-0ded-492d-83fa-5d255f83b266-kube-api-access-fnlkn\") pod \"dnsmasq-dns-79cd4f6685-krlfc\" (UID: \"13f413eb-0ded-492d-83fa-5d255f83b266\") " pod="openstack/dnsmasq-dns-79cd4f6685-krlfc"
Jan 21 16:06:53 crc kubenswrapper[4760]: I0121 16:06:53.964643 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13f413eb-0ded-492d-83fa-5d255f83b266-dns-svc\") pod \"dnsmasq-dns-79cd4f6685-krlfc\" (UID: \"13f413eb-0ded-492d-83fa-5d255f83b266\") " pod="openstack/dnsmasq-dns-79cd4f6685-krlfc"
Jan 21 16:06:53 crc kubenswrapper[4760]: I0121 16:06:53.964749 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/13f413eb-0ded-492d-83fa-5d255f83b266-dns-swift-storage-0\") pod \"dnsmasq-dns-79cd4f6685-krlfc\" (UID: \"13f413eb-0ded-492d-83fa-5d255f83b266\") " pod="openstack/dnsmasq-dns-79cd4f6685-krlfc"
Jan 21 16:06:53 crc kubenswrapper[4760]: 
I0121 16:06:53.964910 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/13f413eb-0ded-492d-83fa-5d255f83b266-ovsdbserver-sb\") pod \"dnsmasq-dns-79cd4f6685-krlfc\" (UID: \"13f413eb-0ded-492d-83fa-5d255f83b266\") " pod="openstack/dnsmasq-dns-79cd4f6685-krlfc" Jan 21 16:06:53 crc kubenswrapper[4760]: I0121 16:06:53.964948 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/13f413eb-0ded-492d-83fa-5d255f83b266-ovsdbserver-nb\") pod \"dnsmasq-dns-79cd4f6685-krlfc\" (UID: \"13f413eb-0ded-492d-83fa-5d255f83b266\") " pod="openstack/dnsmasq-dns-79cd4f6685-krlfc" Jan 21 16:06:53 crc kubenswrapper[4760]: I0121 16:06:53.965007 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13f413eb-0ded-492d-83fa-5d255f83b266-config\") pod \"dnsmasq-dns-79cd4f6685-krlfc\" (UID: \"13f413eb-0ded-492d-83fa-5d255f83b266\") " pod="openstack/dnsmasq-dns-79cd4f6685-krlfc" Jan 21 16:06:54 crc kubenswrapper[4760]: I0121 16:06:54.066490 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/13f413eb-0ded-492d-83fa-5d255f83b266-ovsdbserver-sb\") pod \"dnsmasq-dns-79cd4f6685-krlfc\" (UID: \"13f413eb-0ded-492d-83fa-5d255f83b266\") " pod="openstack/dnsmasq-dns-79cd4f6685-krlfc" Jan 21 16:06:54 crc kubenswrapper[4760]: I0121 16:06:54.066553 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/13f413eb-0ded-492d-83fa-5d255f83b266-ovsdbserver-nb\") pod \"dnsmasq-dns-79cd4f6685-krlfc\" (UID: \"13f413eb-0ded-492d-83fa-5d255f83b266\") " pod="openstack/dnsmasq-dns-79cd4f6685-krlfc" Jan 21 16:06:54 crc kubenswrapper[4760]: I0121 
16:06:54.066618 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13f413eb-0ded-492d-83fa-5d255f83b266-config\") pod \"dnsmasq-dns-79cd4f6685-krlfc\" (UID: \"13f413eb-0ded-492d-83fa-5d255f83b266\") " pod="openstack/dnsmasq-dns-79cd4f6685-krlfc" Jan 21 16:06:54 crc kubenswrapper[4760]: I0121 16:06:54.066726 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnlkn\" (UniqueName: \"kubernetes.io/projected/13f413eb-0ded-492d-83fa-5d255f83b266-kube-api-access-fnlkn\") pod \"dnsmasq-dns-79cd4f6685-krlfc\" (UID: \"13f413eb-0ded-492d-83fa-5d255f83b266\") " pod="openstack/dnsmasq-dns-79cd4f6685-krlfc" Jan 21 16:06:54 crc kubenswrapper[4760]: I0121 16:06:54.066750 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13f413eb-0ded-492d-83fa-5d255f83b266-dns-svc\") pod \"dnsmasq-dns-79cd4f6685-krlfc\" (UID: \"13f413eb-0ded-492d-83fa-5d255f83b266\") " pod="openstack/dnsmasq-dns-79cd4f6685-krlfc" Jan 21 16:06:54 crc kubenswrapper[4760]: I0121 16:06:54.066791 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/13f413eb-0ded-492d-83fa-5d255f83b266-dns-swift-storage-0\") pod \"dnsmasq-dns-79cd4f6685-krlfc\" (UID: \"13f413eb-0ded-492d-83fa-5d255f83b266\") " pod="openstack/dnsmasq-dns-79cd4f6685-krlfc" Jan 21 16:06:54 crc kubenswrapper[4760]: I0121 16:06:54.067995 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/13f413eb-0ded-492d-83fa-5d255f83b266-dns-swift-storage-0\") pod \"dnsmasq-dns-79cd4f6685-krlfc\" (UID: \"13f413eb-0ded-492d-83fa-5d255f83b266\") " pod="openstack/dnsmasq-dns-79cd4f6685-krlfc" Jan 21 16:06:54 crc kubenswrapper[4760]: I0121 16:06:54.068830 4760 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/13f413eb-0ded-492d-83fa-5d255f83b266-ovsdbserver-sb\") pod \"dnsmasq-dns-79cd4f6685-krlfc\" (UID: \"13f413eb-0ded-492d-83fa-5d255f83b266\") " pod="openstack/dnsmasq-dns-79cd4f6685-krlfc" Jan 21 16:06:54 crc kubenswrapper[4760]: I0121 16:06:54.069071 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13f413eb-0ded-492d-83fa-5d255f83b266-config\") pod \"dnsmasq-dns-79cd4f6685-krlfc\" (UID: \"13f413eb-0ded-492d-83fa-5d255f83b266\") " pod="openstack/dnsmasq-dns-79cd4f6685-krlfc" Jan 21 16:06:54 crc kubenswrapper[4760]: I0121 16:06:54.069357 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/13f413eb-0ded-492d-83fa-5d255f83b266-ovsdbserver-nb\") pod \"dnsmasq-dns-79cd4f6685-krlfc\" (UID: \"13f413eb-0ded-492d-83fa-5d255f83b266\") " pod="openstack/dnsmasq-dns-79cd4f6685-krlfc" Jan 21 16:06:54 crc kubenswrapper[4760]: I0121 16:06:54.071448 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13f413eb-0ded-492d-83fa-5d255f83b266-dns-svc\") pod \"dnsmasq-dns-79cd4f6685-krlfc\" (UID: \"13f413eb-0ded-492d-83fa-5d255f83b266\") " pod="openstack/dnsmasq-dns-79cd4f6685-krlfc" Jan 21 16:06:54 crc kubenswrapper[4760]: I0121 16:06:54.100584 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnlkn\" (UniqueName: \"kubernetes.io/projected/13f413eb-0ded-492d-83fa-5d255f83b266-kube-api-access-fnlkn\") pod \"dnsmasq-dns-79cd4f6685-krlfc\" (UID: \"13f413eb-0ded-492d-83fa-5d255f83b266\") " pod="openstack/dnsmasq-dns-79cd4f6685-krlfc" Jan 21 16:06:54 crc kubenswrapper[4760]: I0121 16:06:54.206288 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79cd4f6685-krlfc" Jan 21 16:06:54 crc kubenswrapper[4760]: I0121 16:06:54.466424 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-65c954fbbd-tb9kj" event={"ID":"b3582d40-46db-4b7b-a7ca-12950184f371","Type":"ContainerStarted","Data":"3cf94ef28def6f5e5b7c2c66b89fc19c9e4151c3a8a6a3a7f9559896e44788f2"} Jan 21 16:06:54 crc kubenswrapper[4760]: I0121 16:06:54.764101 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79cd4f6685-krlfc"] Jan 21 16:06:55 crc kubenswrapper[4760]: I0121 16:06:55.478808 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c6778d77f-gkzrk" event={"ID":"42e45354-7553-43f2-af5a-613dd1a6dde9","Type":"ContainerStarted","Data":"d7122d4f8629f3a210f8b46e9616a49444c8942c0c0083ca57f2b14681d06550"} Jan 21 16:06:55 crc kubenswrapper[4760]: I0121 16:06:55.480663 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6c6778d77f-gkzrk" Jan 21 16:06:55 crc kubenswrapper[4760]: I0121 16:06:55.486754 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-65c954fbbd-tb9kj" event={"ID":"b3582d40-46db-4b7b-a7ca-12950184f371","Type":"ContainerStarted","Data":"ac71d8fbf1acfc7a2a618eb21a067b2eb332ec28af1b6d447b21863a44cef029"} Jan 21 16:06:55 crc kubenswrapper[4760]: I0121 16:06:55.486969 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-65c954fbbd-tb9kj" Jan 21 16:06:55 crc kubenswrapper[4760]: I0121 16:06:55.486984 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-65c954fbbd-tb9kj" Jan 21 16:06:55 crc kubenswrapper[4760]: I0121 16:06:55.492377 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79cd4f6685-krlfc" 
event={"ID":"13f413eb-0ded-492d-83fa-5d255f83b266","Type":"ContainerDied","Data":"a098324835da34928446d54bd96c4c7824059772f43d6b1feb5b03ad7acd1d1f"} Jan 21 16:06:55 crc kubenswrapper[4760]: I0121 16:06:55.493105 4760 generic.go:334] "Generic (PLEG): container finished" podID="13f413eb-0ded-492d-83fa-5d255f83b266" containerID="a098324835da34928446d54bd96c4c7824059772f43d6b1feb5b03ad7acd1d1f" exitCode=0 Jan 21 16:06:55 crc kubenswrapper[4760]: I0121 16:06:55.493171 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79cd4f6685-krlfc" event={"ID":"13f413eb-0ded-492d-83fa-5d255f83b266","Type":"ContainerStarted","Data":"8919eb03095eb42378f58031dd0adc0256195d0b5fc9458c192795f5f1457bd7"} Jan 21 16:06:55 crc kubenswrapper[4760]: I0121 16:06:55.493483 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b46f56485-gbws9" podUID="c2d3b257-75ab-4b85-b13b-081bf5b4825e" containerName="dnsmasq-dns" containerID="cri-o://bfd7cccf91406ce385eba94b4e7d2890ec30e85d9369b87b19b4d41e360b2054" gracePeriod=10 Jan 21 16:06:55 crc kubenswrapper[4760]: I0121 16:06:55.516684 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6c6778d77f-gkzrk" podStartSLOduration=5.516656206 podStartE2EDuration="5.516656206s" podCreationTimestamp="2026-01-21 16:06:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:06:55.510196901 +0000 UTC m=+1186.177966479" watchObservedRunningTime="2026-01-21 16:06:55.516656206 +0000 UTC m=+1186.184425784" Jan 21 16:06:55 crc kubenswrapper[4760]: I0121 16:06:55.657441 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-65c954fbbd-tb9kj" podStartSLOduration=4.657416412 podStartE2EDuration="4.657416412s" podCreationTimestamp="2026-01-21 16:06:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:06:55.617789515 +0000 UTC m=+1186.285559103" watchObservedRunningTime="2026-01-21 16:06:55.657416412 +0000 UTC m=+1186.325185990" Jan 21 16:06:56 crc kubenswrapper[4760]: I0121 16:06:56.301994 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b46f56485-gbws9" Jan 21 16:06:56 crc kubenswrapper[4760]: I0121 16:06:56.449443 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2d3b257-75ab-4b85-b13b-081bf5b4825e-ovsdbserver-nb\") pod \"c2d3b257-75ab-4b85-b13b-081bf5b4825e\" (UID: \"c2d3b257-75ab-4b85-b13b-081bf5b4825e\") " Jan 21 16:06:56 crc kubenswrapper[4760]: I0121 16:06:56.449558 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffxvx\" (UniqueName: \"kubernetes.io/projected/c2d3b257-75ab-4b85-b13b-081bf5b4825e-kube-api-access-ffxvx\") pod \"c2d3b257-75ab-4b85-b13b-081bf5b4825e\" (UID: \"c2d3b257-75ab-4b85-b13b-081bf5b4825e\") " Jan 21 16:06:56 crc kubenswrapper[4760]: I0121 16:06:56.449603 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2d3b257-75ab-4b85-b13b-081bf5b4825e-config\") pod \"c2d3b257-75ab-4b85-b13b-081bf5b4825e\" (UID: \"c2d3b257-75ab-4b85-b13b-081bf5b4825e\") " Jan 21 16:06:56 crc kubenswrapper[4760]: I0121 16:06:56.449643 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2d3b257-75ab-4b85-b13b-081bf5b4825e-ovsdbserver-sb\") pod \"c2d3b257-75ab-4b85-b13b-081bf5b4825e\" (UID: \"c2d3b257-75ab-4b85-b13b-081bf5b4825e\") " Jan 21 16:06:56 crc kubenswrapper[4760]: I0121 16:06:56.449683 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/c2d3b257-75ab-4b85-b13b-081bf5b4825e-dns-svc\") pod \"c2d3b257-75ab-4b85-b13b-081bf5b4825e\" (UID: \"c2d3b257-75ab-4b85-b13b-081bf5b4825e\") " Jan 21 16:06:56 crc kubenswrapper[4760]: I0121 16:06:56.478018 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2d3b257-75ab-4b85-b13b-081bf5b4825e-kube-api-access-ffxvx" (OuterVolumeSpecName: "kube-api-access-ffxvx") pod "c2d3b257-75ab-4b85-b13b-081bf5b4825e" (UID: "c2d3b257-75ab-4b85-b13b-081bf5b4825e"). InnerVolumeSpecName "kube-api-access-ffxvx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:06:56 crc kubenswrapper[4760]: I0121 16:06:56.515840 4760 generic.go:334] "Generic (PLEG): container finished" podID="c2d3b257-75ab-4b85-b13b-081bf5b4825e" containerID="bfd7cccf91406ce385eba94b4e7d2890ec30e85d9369b87b19b4d41e360b2054" exitCode=0 Jan 21 16:06:56 crc kubenswrapper[4760]: I0121 16:06:56.517142 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b46f56485-gbws9" Jan 21 16:06:56 crc kubenswrapper[4760]: I0121 16:06:56.517763 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b46f56485-gbws9" event={"ID":"c2d3b257-75ab-4b85-b13b-081bf5b4825e","Type":"ContainerDied","Data":"bfd7cccf91406ce385eba94b4e7d2890ec30e85d9369b87b19b4d41e360b2054"} Jan 21 16:06:56 crc kubenswrapper[4760]: I0121 16:06:56.517805 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b46f56485-gbws9" event={"ID":"c2d3b257-75ab-4b85-b13b-081bf5b4825e","Type":"ContainerDied","Data":"eaafe848834c73555642821066dc7bea67c04d548195e7becd244c407f552006"} Jan 21 16:06:56 crc kubenswrapper[4760]: I0121 16:06:56.517831 4760 scope.go:117] "RemoveContainer" containerID="bfd7cccf91406ce385eba94b4e7d2890ec30e85d9369b87b19b4d41e360b2054" Jan 21 16:06:56 crc kubenswrapper[4760]: I0121 16:06:56.525621 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2d3b257-75ab-4b85-b13b-081bf5b4825e-config" (OuterVolumeSpecName: "config") pod "c2d3b257-75ab-4b85-b13b-081bf5b4825e" (UID: "c2d3b257-75ab-4b85-b13b-081bf5b4825e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:06:56 crc kubenswrapper[4760]: I0121 16:06:56.553958 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ffxvx\" (UniqueName: \"kubernetes.io/projected/c2d3b257-75ab-4b85-b13b-081bf5b4825e-kube-api-access-ffxvx\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:56 crc kubenswrapper[4760]: I0121 16:06:56.554022 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2d3b257-75ab-4b85-b13b-081bf5b4825e-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:56 crc kubenswrapper[4760]: I0121 16:06:56.556696 4760 scope.go:117] "RemoveContainer" containerID="81257208905e23b7deeb6f95a7d5792368fab283dcded98d3eb8c4c312d84444" Jan 21 16:06:56 crc kubenswrapper[4760]: I0121 16:06:56.564384 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2d3b257-75ab-4b85-b13b-081bf5b4825e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c2d3b257-75ab-4b85-b13b-081bf5b4825e" (UID: "c2d3b257-75ab-4b85-b13b-081bf5b4825e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:06:56 crc kubenswrapper[4760]: I0121 16:06:56.566448 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2d3b257-75ab-4b85-b13b-081bf5b4825e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c2d3b257-75ab-4b85-b13b-081bf5b4825e" (UID: "c2d3b257-75ab-4b85-b13b-081bf5b4825e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:06:56 crc kubenswrapper[4760]: I0121 16:06:56.584971 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2d3b257-75ab-4b85-b13b-081bf5b4825e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c2d3b257-75ab-4b85-b13b-081bf5b4825e" (UID: "c2d3b257-75ab-4b85-b13b-081bf5b4825e"). 
InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:06:56 crc kubenswrapper[4760]: I0121 16:06:56.591418 4760 scope.go:117] "RemoveContainer" containerID="bfd7cccf91406ce385eba94b4e7d2890ec30e85d9369b87b19b4d41e360b2054" Jan 21 16:06:56 crc kubenswrapper[4760]: E0121 16:06:56.592913 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfd7cccf91406ce385eba94b4e7d2890ec30e85d9369b87b19b4d41e360b2054\": container with ID starting with bfd7cccf91406ce385eba94b4e7d2890ec30e85d9369b87b19b4d41e360b2054 not found: ID does not exist" containerID="bfd7cccf91406ce385eba94b4e7d2890ec30e85d9369b87b19b4d41e360b2054" Jan 21 16:06:56 crc kubenswrapper[4760]: I0121 16:06:56.593176 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfd7cccf91406ce385eba94b4e7d2890ec30e85d9369b87b19b4d41e360b2054"} err="failed to get container status \"bfd7cccf91406ce385eba94b4e7d2890ec30e85d9369b87b19b4d41e360b2054\": rpc error: code = NotFound desc = could not find container \"bfd7cccf91406ce385eba94b4e7d2890ec30e85d9369b87b19b4d41e360b2054\": container with ID starting with bfd7cccf91406ce385eba94b4e7d2890ec30e85d9369b87b19b4d41e360b2054 not found: ID does not exist" Jan 21 16:06:56 crc kubenswrapper[4760]: I0121 16:06:56.593257 4760 scope.go:117] "RemoveContainer" containerID="81257208905e23b7deeb6f95a7d5792368fab283dcded98d3eb8c4c312d84444" Jan 21 16:06:56 crc kubenswrapper[4760]: E0121 16:06:56.593893 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81257208905e23b7deeb6f95a7d5792368fab283dcded98d3eb8c4c312d84444\": container with ID starting with 81257208905e23b7deeb6f95a7d5792368fab283dcded98d3eb8c4c312d84444 not found: ID does not exist" containerID="81257208905e23b7deeb6f95a7d5792368fab283dcded98d3eb8c4c312d84444" Jan 21 16:06:56 crc 
kubenswrapper[4760]: I0121 16:06:56.594563 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81257208905e23b7deeb6f95a7d5792368fab283dcded98d3eb8c4c312d84444"} err="failed to get container status \"81257208905e23b7deeb6f95a7d5792368fab283dcded98d3eb8c4c312d84444\": rpc error: code = NotFound desc = could not find container \"81257208905e23b7deeb6f95a7d5792368fab283dcded98d3eb8c4c312d84444\": container with ID starting with 81257208905e23b7deeb6f95a7d5792368fab283dcded98d3eb8c4c312d84444 not found: ID does not exist" Jan 21 16:06:56 crc kubenswrapper[4760]: I0121 16:06:56.657536 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2d3b257-75ab-4b85-b13b-081bf5b4825e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:56 crc kubenswrapper[4760]: I0121 16:06:56.657585 4760 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2d3b257-75ab-4b85-b13b-081bf5b4825e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:56 crc kubenswrapper[4760]: I0121 16:06:56.657945 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2d3b257-75ab-4b85-b13b-081bf5b4825e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 16:06:56 crc kubenswrapper[4760]: I0121 16:06:56.867399 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b46f56485-gbws9"] Jan 21 16:06:56 crc kubenswrapper[4760]: I0121 16:06:56.879691 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b46f56485-gbws9"] Jan 21 16:06:57 crc kubenswrapper[4760]: I0121 16:06:57.530016 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79cd4f6685-krlfc" 
event={"ID":"13f413eb-0ded-492d-83fa-5d255f83b266","Type":"ContainerStarted","Data":"7738982acb0c3ed317e4b86401b4020cc1a7e9ffdb58ad3f4a19d26d9d619659"} Jan 21 16:06:57 crc kubenswrapper[4760]: I0121 16:06:57.642277 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2d3b257-75ab-4b85-b13b-081bf5b4825e" path="/var/lib/kubelet/pods/c2d3b257-75ab-4b85-b13b-081bf5b4825e/volumes" Jan 21 16:06:58 crc kubenswrapper[4760]: I0121 16:06:58.542105 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-j76bd" event={"ID":"3bf0e00e-fc38-45a9-8615-dd5398ed1209","Type":"ContainerStarted","Data":"598ec327f33c8f0775a344d68602f27c1cbe21cb28c1e14088316a4fccca40b4"} Jan 21 16:06:59 crc kubenswrapper[4760]: I0121 16:06:59.560839 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79cd4f6685-krlfc" Jan 21 16:06:59 crc kubenswrapper[4760]: I0121 16:06:59.582240 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-j76bd" podStartSLOduration=7.559219672 podStartE2EDuration="1m7.582214819s" podCreationTimestamp="2026-01-21 16:05:52 +0000 UTC" firstStartedPulling="2026-01-21 16:05:55.251907696 +0000 UTC m=+1125.919677274" lastFinishedPulling="2026-01-21 16:06:55.274902843 +0000 UTC m=+1185.942672421" observedRunningTime="2026-01-21 16:06:59.580450287 +0000 UTC m=+1190.248219865" watchObservedRunningTime="2026-01-21 16:06:59.582214819 +0000 UTC m=+1190.249984397" Jan 21 16:06:59 crc kubenswrapper[4760]: I0121 16:06:59.600439 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79cd4f6685-krlfc" podStartSLOduration=6.600411675 podStartE2EDuration="6.600411675s" podCreationTimestamp="2026-01-21 16:06:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:06:59.598899138 +0000 UTC m=+1190.266668726" 
watchObservedRunningTime="2026-01-21 16:06:59.600411675 +0000 UTC m=+1190.268181263" Jan 21 16:07:02 crc kubenswrapper[4760]: I0121 16:07:02.953030 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-789c75ff48-s7f9p" podUID="9ce8d17c-d046-45b5-9136-6faca838de63" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Jan 21 16:07:03 crc kubenswrapper[4760]: I0121 16:07:03.093395 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5c9896dc76-gwrzv" podUID="0e7e96ce-a64f-4a21-97e1-b2ebabc7e236" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Jan 21 16:07:04 crc kubenswrapper[4760]: I0121 16:07:04.207690 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79cd4f6685-krlfc" Jan 21 16:07:04 crc kubenswrapper[4760]: I0121 16:07:04.271375 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-4mwgm"] Jan 21 16:07:04 crc kubenswrapper[4760]: I0121 16:07:04.271726 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6ffb94d8ff-4mwgm" podUID="24140731-e427-429e-a6cc-ad33f28eadb3" containerName="dnsmasq-dns" containerID="cri-o://f6ebe2583edd80f4105775726ca9ea906b253fc8f8788aa31635ce1de7544730" gracePeriod=10 Jan 21 16:07:04 crc kubenswrapper[4760]: I0121 16:07:04.482234 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6ffb94d8ff-4mwgm" podUID="24140731-e427-429e-a6cc-ad33f28eadb3" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.142:5353: connect: connection refused" Jan 21 16:07:04 crc kubenswrapper[4760]: I0121 16:07:04.619133 4760 generic.go:334] "Generic (PLEG): 
container finished" podID="e272905b-28ec-4f49-8c51-f5c5d97c4a9d" containerID="898b834ef1be751c68f08b1b203b5655c64ba2844d18217a131ec12119259d69" exitCode=0 Jan 21 16:07:04 crc kubenswrapper[4760]: I0121 16:07:04.619218 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-pgvwf" event={"ID":"e272905b-28ec-4f49-8c51-f5c5d97c4a9d","Type":"ContainerDied","Data":"898b834ef1be751c68f08b1b203b5655c64ba2844d18217a131ec12119259d69"} Jan 21 16:07:04 crc kubenswrapper[4760]: I0121 16:07:04.622886 4760 generic.go:334] "Generic (PLEG): container finished" podID="24140731-e427-429e-a6cc-ad33f28eadb3" containerID="f6ebe2583edd80f4105775726ca9ea906b253fc8f8788aa31635ce1de7544730" exitCode=0 Jan 21 16:07:04 crc kubenswrapper[4760]: I0121 16:07:04.622956 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffb94d8ff-4mwgm" event={"ID":"24140731-e427-429e-a6cc-ad33f28eadb3","Type":"ContainerDied","Data":"f6ebe2583edd80f4105775726ca9ea906b253fc8f8788aa31635ce1de7544730"} Jan 21 16:07:05 crc kubenswrapper[4760]: I0121 16:07:05.574615 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6ffb94d8ff-4mwgm" Jan 21 16:07:05 crc kubenswrapper[4760]: E0121 16:07:05.659074 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="a8b68aa1-7489-4689-ad6b-8aa7149b9a67" Jan 21 16:07:05 crc kubenswrapper[4760]: I0121 16:07:05.663389 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24140731-e427-429e-a6cc-ad33f28eadb3-dns-svc\") pod \"24140731-e427-429e-a6cc-ad33f28eadb3\" (UID: \"24140731-e427-429e-a6cc-ad33f28eadb3\") " Jan 21 16:07:05 crc kubenswrapper[4760]: I0121 16:07:05.663594 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24140731-e427-429e-a6cc-ad33f28eadb3-config\") pod \"24140731-e427-429e-a6cc-ad33f28eadb3\" (UID: \"24140731-e427-429e-a6cc-ad33f28eadb3\") " Jan 21 16:07:05 crc kubenswrapper[4760]: I0121 16:07:05.663775 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24140731-e427-429e-a6cc-ad33f28eadb3-ovsdbserver-nb\") pod \"24140731-e427-429e-a6cc-ad33f28eadb3\" (UID: \"24140731-e427-429e-a6cc-ad33f28eadb3\") " Jan 21 16:07:05 crc kubenswrapper[4760]: I0121 16:07:05.663809 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24140731-e427-429e-a6cc-ad33f28eadb3-ovsdbserver-sb\") pod \"24140731-e427-429e-a6cc-ad33f28eadb3\" (UID: \"24140731-e427-429e-a6cc-ad33f28eadb3\") " Jan 21 16:07:05 crc kubenswrapper[4760]: I0121 16:07:05.663881 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2clm\" (UniqueName: 
\"kubernetes.io/projected/24140731-e427-429e-a6cc-ad33f28eadb3-kube-api-access-g2clm\") pod \"24140731-e427-429e-a6cc-ad33f28eadb3\" (UID: \"24140731-e427-429e-a6cc-ad33f28eadb3\") " Jan 21 16:07:05 crc kubenswrapper[4760]: I0121 16:07:05.670172 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ffb94d8ff-4mwgm" Jan 21 16:07:05 crc kubenswrapper[4760]: I0121 16:07:05.670410 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffb94d8ff-4mwgm" event={"ID":"24140731-e427-429e-a6cc-ad33f28eadb3","Type":"ContainerDied","Data":"6a1d3db7a9078b67e10847c534f3aeb922e669c4fb474cc218aaf633d69560d0"} Jan 21 16:07:05 crc kubenswrapper[4760]: I0121 16:07:05.670503 4760 scope.go:117] "RemoveContainer" containerID="f6ebe2583edd80f4105775726ca9ea906b253fc8f8788aa31635ce1de7544730" Jan 21 16:07:05 crc kubenswrapper[4760]: I0121 16:07:05.671128 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24140731-e427-429e-a6cc-ad33f28eadb3-kube-api-access-g2clm" (OuterVolumeSpecName: "kube-api-access-g2clm") pod "24140731-e427-429e-a6cc-ad33f28eadb3" (UID: "24140731-e427-429e-a6cc-ad33f28eadb3"). InnerVolumeSpecName "kube-api-access-g2clm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:07:05 crc kubenswrapper[4760]: I0121 16:07:05.729971 4760 scope.go:117] "RemoveContainer" containerID="bba3d3f5c39e63bbea59396fd0379c03d80941c17b1ee5ae5aa8abc9754a2304" Jan 21 16:07:05 crc kubenswrapper[4760]: I0121 16:07:05.755526 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24140731-e427-429e-a6cc-ad33f28eadb3-config" (OuterVolumeSpecName: "config") pod "24140731-e427-429e-a6cc-ad33f28eadb3" (UID: "24140731-e427-429e-a6cc-ad33f28eadb3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:05 crc kubenswrapper[4760]: I0121 16:07:05.768664 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24140731-e427-429e-a6cc-ad33f28eadb3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "24140731-e427-429e-a6cc-ad33f28eadb3" (UID: "24140731-e427-429e-a6cc-ad33f28eadb3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:05 crc kubenswrapper[4760]: I0121 16:07:05.772580 4760 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24140731-e427-429e-a6cc-ad33f28eadb3-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:05 crc kubenswrapper[4760]: I0121 16:07:05.772613 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24140731-e427-429e-a6cc-ad33f28eadb3-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:05 crc kubenswrapper[4760]: I0121 16:07:05.772623 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2clm\" (UniqueName: \"kubernetes.io/projected/24140731-e427-429e-a6cc-ad33f28eadb3-kube-api-access-g2clm\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:05 crc kubenswrapper[4760]: I0121 16:07:05.774752 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24140731-e427-429e-a6cc-ad33f28eadb3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "24140731-e427-429e-a6cc-ad33f28eadb3" (UID: "24140731-e427-429e-a6cc-ad33f28eadb3"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:05 crc kubenswrapper[4760]: I0121 16:07:05.789198 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24140731-e427-429e-a6cc-ad33f28eadb3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "24140731-e427-429e-a6cc-ad33f28eadb3" (UID: "24140731-e427-429e-a6cc-ad33f28eadb3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:05 crc kubenswrapper[4760]: I0121 16:07:05.874899 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/24140731-e427-429e-a6cc-ad33f28eadb3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:05 crc kubenswrapper[4760]: I0121 16:07:05.874957 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/24140731-e427-429e-a6cc-ad33f28eadb3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.009485 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-4mwgm"] Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.021538 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-4mwgm"] Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.047017 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-pgvwf" Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.180974 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crhwn\" (UniqueName: \"kubernetes.io/projected/e272905b-28ec-4f49-8c51-f5c5d97c4a9d-kube-api-access-crhwn\") pod \"e272905b-28ec-4f49-8c51-f5c5d97c4a9d\" (UID: \"e272905b-28ec-4f49-8c51-f5c5d97c4a9d\") " Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.181070 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e272905b-28ec-4f49-8c51-f5c5d97c4a9d-db-sync-config-data\") pod \"e272905b-28ec-4f49-8c51-f5c5d97c4a9d\" (UID: \"e272905b-28ec-4f49-8c51-f5c5d97c4a9d\") " Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.181534 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e272905b-28ec-4f49-8c51-f5c5d97c4a9d-combined-ca-bundle\") pod \"e272905b-28ec-4f49-8c51-f5c5d97c4a9d\" (UID: \"e272905b-28ec-4f49-8c51-f5c5d97c4a9d\") " Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.186590 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e272905b-28ec-4f49-8c51-f5c5d97c4a9d-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "e272905b-28ec-4f49-8c51-f5c5d97c4a9d" (UID: "e272905b-28ec-4f49-8c51-f5c5d97c4a9d"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.186923 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e272905b-28ec-4f49-8c51-f5c5d97c4a9d-kube-api-access-crhwn" (OuterVolumeSpecName: "kube-api-access-crhwn") pod "e272905b-28ec-4f49-8c51-f5c5d97c4a9d" (UID: "e272905b-28ec-4f49-8c51-f5c5d97c4a9d"). 
InnerVolumeSpecName "kube-api-access-crhwn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.208103 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e272905b-28ec-4f49-8c51-f5c5d97c4a9d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e272905b-28ec-4f49-8c51-f5c5d97c4a9d" (UID: "e272905b-28ec-4f49-8c51-f5c5d97c4a9d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.283716 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e272905b-28ec-4f49-8c51-f5c5d97c4a9d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.283756 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crhwn\" (UniqueName: \"kubernetes.io/projected/e272905b-28ec-4f49-8c51-f5c5d97c4a9d-kube-api-access-crhwn\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.283770 4760 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e272905b-28ec-4f49-8c51-f5c5d97c4a9d-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.686960 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-pgvwf" Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.692560 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-pgvwf" event={"ID":"e272905b-28ec-4f49-8c51-f5c5d97c4a9d","Type":"ContainerDied","Data":"e122d7d9f206fc81eb1c1fe0d60d94f65902f9b343f8a1406a38254a99f711ae"} Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.692648 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e122d7d9f206fc81eb1c1fe0d60d94f65902f9b343f8a1406a38254a99f711ae" Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.696451 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a8b68aa1-7489-4689-ad6b-8aa7149b9a67" containerName="ceilometer-notification-agent" containerID="cri-o://9d700a5c7ac14932980f3741c7873a20c30a9d2f8d3054dc9aae795c4000fb25" gracePeriod=30 Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.696639 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a8b68aa1-7489-4689-ad6b-8aa7149b9a67" containerName="proxy-httpd" containerID="cri-o://cef94343eaae73039a287ce5f2e8e733ae8f53526605a54ba84af6fd34b8833f" gracePeriod=30 Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.696671 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a8b68aa1-7489-4689-ad6b-8aa7149b9a67" containerName="sg-core" containerID="cri-o://32ffc3ef3b4941d1f85287b3917cf08bba8a482a2d1cac30e443cf2335d64089" gracePeriod=30 Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.696634 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8b68aa1-7489-4689-ad6b-8aa7149b9a67","Type":"ContainerStarted","Data":"cef94343eaae73039a287ce5f2e8e733ae8f53526605a54ba84af6fd34b8833f"} Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.697169 4760 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.934305 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-86579fc786-9vmn6"] Jan 21 16:07:06 crc kubenswrapper[4760]: E0121 16:07:06.935119 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e272905b-28ec-4f49-8c51-f5c5d97c4a9d" containerName="barbican-db-sync" Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.935150 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="e272905b-28ec-4f49-8c51-f5c5d97c4a9d" containerName="barbican-db-sync" Jan 21 16:07:06 crc kubenswrapper[4760]: E0121 16:07:06.935179 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2d3b257-75ab-4b85-b13b-081bf5b4825e" containerName="init" Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.935189 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2d3b257-75ab-4b85-b13b-081bf5b4825e" containerName="init" Jan 21 16:07:06 crc kubenswrapper[4760]: E0121 16:07:06.935200 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24140731-e427-429e-a6cc-ad33f28eadb3" containerName="init" Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.935209 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="24140731-e427-429e-a6cc-ad33f28eadb3" containerName="init" Jan 21 16:07:06 crc kubenswrapper[4760]: E0121 16:07:06.935272 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2d3b257-75ab-4b85-b13b-081bf5b4825e" containerName="dnsmasq-dns" Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.935283 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2d3b257-75ab-4b85-b13b-081bf5b4825e" containerName="dnsmasq-dns" Jan 21 16:07:06 crc kubenswrapper[4760]: E0121 16:07:06.935295 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24140731-e427-429e-a6cc-ad33f28eadb3" 
containerName="dnsmasq-dns" Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.935303 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="24140731-e427-429e-a6cc-ad33f28eadb3" containerName="dnsmasq-dns" Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.935544 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2d3b257-75ab-4b85-b13b-081bf5b4825e" containerName="dnsmasq-dns" Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.935605 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="e272905b-28ec-4f49-8c51-f5c5d97c4a9d" containerName="barbican-db-sync" Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.935626 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="24140731-e427-429e-a6cc-ad33f28eadb3" containerName="dnsmasq-dns" Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.937096 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-86579fc786-9vmn6" Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.949269 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.949667 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.955298 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-5wvcs" Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.997495 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6283023b-6e8b-4d25-b8e9-c0d91b08a913-combined-ca-bundle\") pod \"barbican-keystone-listener-86579fc786-9vmn6\" (UID: \"6283023b-6e8b-4d25-b8e9-c0d91b08a913\") " pod="openstack/barbican-keystone-listener-86579fc786-9vmn6" Jan 21 
16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.997599 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6283023b-6e8b-4d25-b8e9-c0d91b08a913-config-data-custom\") pod \"barbican-keystone-listener-86579fc786-9vmn6\" (UID: \"6283023b-6e8b-4d25-b8e9-c0d91b08a913\") " pod="openstack/barbican-keystone-listener-86579fc786-9vmn6" Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.997687 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6283023b-6e8b-4d25-b8e9-c0d91b08a913-logs\") pod \"barbican-keystone-listener-86579fc786-9vmn6\" (UID: \"6283023b-6e8b-4d25-b8e9-c0d91b08a913\") " pod="openstack/barbican-keystone-listener-86579fc786-9vmn6" Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.997777 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2q7p\" (UniqueName: \"kubernetes.io/projected/6283023b-6e8b-4d25-b8e9-c0d91b08a913-kube-api-access-g2q7p\") pod \"barbican-keystone-listener-86579fc786-9vmn6\" (UID: \"6283023b-6e8b-4d25-b8e9-c0d91b08a913\") " pod="openstack/barbican-keystone-listener-86579fc786-9vmn6" Jan 21 16:07:06 crc kubenswrapper[4760]: I0121 16:07:06.998043 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6283023b-6e8b-4d25-b8e9-c0d91b08a913-config-data\") pod \"barbican-keystone-listener-86579fc786-9vmn6\" (UID: \"6283023b-6e8b-4d25-b8e9-c0d91b08a913\") " pod="openstack/barbican-keystone-listener-86579fc786-9vmn6" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:06.974728 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-757cdb9855-pfpj6"] Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.017497 4760 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/barbican-keystone-listener-86579fc786-9vmn6"] Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.017650 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-757cdb9855-pfpj6" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.027466 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.058495 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-757cdb9855-pfpj6"] Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.088402 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8647866847-sn996"] Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.090607 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8647866847-sn996" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.100452 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed-logs\") pod \"barbican-worker-757cdb9855-pfpj6\" (UID: \"470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed\") " pod="openstack/barbican-worker-757cdb9855-pfpj6" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.100560 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6283023b-6e8b-4d25-b8e9-c0d91b08a913-config-data\") pod \"barbican-keystone-listener-86579fc786-9vmn6\" (UID: \"6283023b-6e8b-4d25-b8e9-c0d91b08a913\") " pod="openstack/barbican-keystone-listener-86579fc786-9vmn6" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.100635 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpjmt\" (UniqueName: 
\"kubernetes.io/projected/470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed-kube-api-access-hpjmt\") pod \"barbican-worker-757cdb9855-pfpj6\" (UID: \"470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed\") " pod="openstack/barbican-worker-757cdb9855-pfpj6" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.100684 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed-combined-ca-bundle\") pod \"barbican-worker-757cdb9855-pfpj6\" (UID: \"470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed\") " pod="openstack/barbican-worker-757cdb9855-pfpj6" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.100730 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed-config-data\") pod \"barbican-worker-757cdb9855-pfpj6\" (UID: \"470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed\") " pod="openstack/barbican-worker-757cdb9855-pfpj6" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.100794 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6283023b-6e8b-4d25-b8e9-c0d91b08a913-combined-ca-bundle\") pod \"barbican-keystone-listener-86579fc786-9vmn6\" (UID: \"6283023b-6e8b-4d25-b8e9-c0d91b08a913\") " pod="openstack/barbican-keystone-listener-86579fc786-9vmn6" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.100826 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6283023b-6e8b-4d25-b8e9-c0d91b08a913-config-data-custom\") pod \"barbican-keystone-listener-86579fc786-9vmn6\" (UID: \"6283023b-6e8b-4d25-b8e9-c0d91b08a913\") " pod="openstack/barbican-keystone-listener-86579fc786-9vmn6" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.100926 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed-config-data-custom\") pod \"barbican-worker-757cdb9855-pfpj6\" (UID: \"470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed\") " pod="openstack/barbican-worker-757cdb9855-pfpj6" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.101054 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6283023b-6e8b-4d25-b8e9-c0d91b08a913-logs\") pod \"barbican-keystone-listener-86579fc786-9vmn6\" (UID: \"6283023b-6e8b-4d25-b8e9-c0d91b08a913\") " pod="openstack/barbican-keystone-listener-86579fc786-9vmn6" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.101084 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2q7p\" (UniqueName: \"kubernetes.io/projected/6283023b-6e8b-4d25-b8e9-c0d91b08a913-kube-api-access-g2q7p\") pod \"barbican-keystone-listener-86579fc786-9vmn6\" (UID: \"6283023b-6e8b-4d25-b8e9-c0d91b08a913\") " pod="openstack/barbican-keystone-listener-86579fc786-9vmn6" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.103671 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6283023b-6e8b-4d25-b8e9-c0d91b08a913-logs\") pod \"barbican-keystone-listener-86579fc786-9vmn6\" (UID: \"6283023b-6e8b-4d25-b8e9-c0d91b08a913\") " pod="openstack/barbican-keystone-listener-86579fc786-9vmn6" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.118538 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6283023b-6e8b-4d25-b8e9-c0d91b08a913-config-data\") pod \"barbican-keystone-listener-86579fc786-9vmn6\" (UID: \"6283023b-6e8b-4d25-b8e9-c0d91b08a913\") " pod="openstack/barbican-keystone-listener-86579fc786-9vmn6" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 
16:07:07.124905 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6283023b-6e8b-4d25-b8e9-c0d91b08a913-config-data-custom\") pod \"barbican-keystone-listener-86579fc786-9vmn6\" (UID: \"6283023b-6e8b-4d25-b8e9-c0d91b08a913\") " pod="openstack/barbican-keystone-listener-86579fc786-9vmn6" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.125351 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6283023b-6e8b-4d25-b8e9-c0d91b08a913-combined-ca-bundle\") pod \"barbican-keystone-listener-86579fc786-9vmn6\" (UID: \"6283023b-6e8b-4d25-b8e9-c0d91b08a913\") " pod="openstack/barbican-keystone-listener-86579fc786-9vmn6" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.138108 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2q7p\" (UniqueName: \"kubernetes.io/projected/6283023b-6e8b-4d25-b8e9-c0d91b08a913-kube-api-access-g2q7p\") pod \"barbican-keystone-listener-86579fc786-9vmn6\" (UID: \"6283023b-6e8b-4d25-b8e9-c0d91b08a913\") " pod="openstack/barbican-keystone-listener-86579fc786-9vmn6" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.138205 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8647866847-sn996"] Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.203656 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0223a6f4-1b74-490b-913d-9421094e5f35-ovsdbserver-nb\") pod \"dnsmasq-dns-8647866847-sn996\" (UID: \"0223a6f4-1b74-490b-913d-9421094e5f35\") " pod="openstack/dnsmasq-dns-8647866847-sn996" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.204360 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/0223a6f4-1b74-490b-913d-9421094e5f35-dns-svc\") pod \"dnsmasq-dns-8647866847-sn996\" (UID: \"0223a6f4-1b74-490b-913d-9421094e5f35\") " pod="openstack/dnsmasq-dns-8647866847-sn996" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.204434 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0223a6f4-1b74-490b-913d-9421094e5f35-ovsdbserver-sb\") pod \"dnsmasq-dns-8647866847-sn996\" (UID: \"0223a6f4-1b74-490b-913d-9421094e5f35\") " pod="openstack/dnsmasq-dns-8647866847-sn996" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.204488 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed-config-data-custom\") pod \"barbican-worker-757cdb9855-pfpj6\" (UID: \"470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed\") " pod="openstack/barbican-worker-757cdb9855-pfpj6" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.204569 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr5xn\" (UniqueName: \"kubernetes.io/projected/0223a6f4-1b74-490b-913d-9421094e5f35-kube-api-access-cr5xn\") pod \"dnsmasq-dns-8647866847-sn996\" (UID: \"0223a6f4-1b74-490b-913d-9421094e5f35\") " pod="openstack/dnsmasq-dns-8647866847-sn996" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.204609 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed-logs\") pod \"barbican-worker-757cdb9855-pfpj6\" (UID: \"470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed\") " pod="openstack/barbican-worker-757cdb9855-pfpj6" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.204634 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/0223a6f4-1b74-490b-913d-9421094e5f35-config\") pod \"dnsmasq-dns-8647866847-sn996\" (UID: \"0223a6f4-1b74-490b-913d-9421094e5f35\") " pod="openstack/dnsmasq-dns-8647866847-sn996" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.204679 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0223a6f4-1b74-490b-913d-9421094e5f35-dns-swift-storage-0\") pod \"dnsmasq-dns-8647866847-sn996\" (UID: \"0223a6f4-1b74-490b-913d-9421094e5f35\") " pod="openstack/dnsmasq-dns-8647866847-sn996" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.204713 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpjmt\" (UniqueName: \"kubernetes.io/projected/470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed-kube-api-access-hpjmt\") pod \"barbican-worker-757cdb9855-pfpj6\" (UID: \"470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed\") " pod="openstack/barbican-worker-757cdb9855-pfpj6" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.204744 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed-combined-ca-bundle\") pod \"barbican-worker-757cdb9855-pfpj6\" (UID: \"470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed\") " pod="openstack/barbican-worker-757cdb9855-pfpj6" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.204790 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed-config-data\") pod \"barbican-worker-757cdb9855-pfpj6\" (UID: \"470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed\") " pod="openstack/barbican-worker-757cdb9855-pfpj6" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.205903 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed-logs\") pod \"barbican-worker-757cdb9855-pfpj6\" (UID: \"470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed\") " pod="openstack/barbican-worker-757cdb9855-pfpj6" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.210134 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed-config-data-custom\") pod \"barbican-worker-757cdb9855-pfpj6\" (UID: \"470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed\") " pod="openstack/barbican-worker-757cdb9855-pfpj6" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.211882 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed-combined-ca-bundle\") pod \"barbican-worker-757cdb9855-pfpj6\" (UID: \"470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed\") " pod="openstack/barbican-worker-757cdb9855-pfpj6" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.213551 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed-config-data\") pod \"barbican-worker-757cdb9855-pfpj6\" (UID: \"470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed\") " pod="openstack/barbican-worker-757cdb9855-pfpj6" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.225977 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpjmt\" (UniqueName: \"kubernetes.io/projected/470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed-kube-api-access-hpjmt\") pod \"barbican-worker-757cdb9855-pfpj6\" (UID: \"470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed\") " pod="openstack/barbican-worker-757cdb9855-pfpj6" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.306680 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/0223a6f4-1b74-490b-913d-9421094e5f35-ovsdbserver-nb\") pod \"dnsmasq-dns-8647866847-sn996\" (UID: \"0223a6f4-1b74-490b-913d-9421094e5f35\") " pod="openstack/dnsmasq-dns-8647866847-sn996" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.306957 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0223a6f4-1b74-490b-913d-9421094e5f35-dns-svc\") pod \"dnsmasq-dns-8647866847-sn996\" (UID: \"0223a6f4-1b74-490b-913d-9421094e5f35\") " pod="openstack/dnsmasq-dns-8647866847-sn996" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.307171 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0223a6f4-1b74-490b-913d-9421094e5f35-ovsdbserver-sb\") pod \"dnsmasq-dns-8647866847-sn996\" (UID: \"0223a6f4-1b74-490b-913d-9421094e5f35\") " pod="openstack/dnsmasq-dns-8647866847-sn996" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.308096 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0223a6f4-1b74-490b-913d-9421094e5f35-ovsdbserver-nb\") pod \"dnsmasq-dns-8647866847-sn996\" (UID: \"0223a6f4-1b74-490b-913d-9421094e5f35\") " pod="openstack/dnsmasq-dns-8647866847-sn996" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.308769 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0223a6f4-1b74-490b-913d-9421094e5f35-ovsdbserver-sb\") pod \"dnsmasq-dns-8647866847-sn996\" (UID: \"0223a6f4-1b74-490b-913d-9421094e5f35\") " pod="openstack/dnsmasq-dns-8647866847-sn996" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.308971 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0223a6f4-1b74-490b-913d-9421094e5f35-dns-svc\") pod 
\"dnsmasq-dns-8647866847-sn996\" (UID: \"0223a6f4-1b74-490b-913d-9421094e5f35\") " pod="openstack/dnsmasq-dns-8647866847-sn996" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.309413 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cr5xn\" (UniqueName: \"kubernetes.io/projected/0223a6f4-1b74-490b-913d-9421094e5f35-kube-api-access-cr5xn\") pod \"dnsmasq-dns-8647866847-sn996\" (UID: \"0223a6f4-1b74-490b-913d-9421094e5f35\") " pod="openstack/dnsmasq-dns-8647866847-sn996" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.310070 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0223a6f4-1b74-490b-913d-9421094e5f35-config\") pod \"dnsmasq-dns-8647866847-sn996\" (UID: \"0223a6f4-1b74-490b-913d-9421094e5f35\") " pod="openstack/dnsmasq-dns-8647866847-sn996" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.310934 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0223a6f4-1b74-490b-913d-9421094e5f35-config\") pod \"dnsmasq-dns-8647866847-sn996\" (UID: \"0223a6f4-1b74-490b-913d-9421094e5f35\") " pod="openstack/dnsmasq-dns-8647866847-sn996" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.311240 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0223a6f4-1b74-490b-913d-9421094e5f35-dns-swift-storage-0\") pod \"dnsmasq-dns-8647866847-sn996\" (UID: \"0223a6f4-1b74-490b-913d-9421094e5f35\") " pod="openstack/dnsmasq-dns-8647866847-sn996" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.312102 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0223a6f4-1b74-490b-913d-9421094e5f35-dns-swift-storage-0\") pod \"dnsmasq-dns-8647866847-sn996\" (UID: 
\"0223a6f4-1b74-490b-913d-9421094e5f35\") " pod="openstack/dnsmasq-dns-8647866847-sn996" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.329087 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-86579fc786-9vmn6" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.335868 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5df994f884-hfwfn"] Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.338160 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5df994f884-hfwfn" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.354708 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.359558 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5df994f884-hfwfn"] Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.364633 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cr5xn\" (UniqueName: \"kubernetes.io/projected/0223a6f4-1b74-490b-913d-9421094e5f35-kube-api-access-cr5xn\") pod \"dnsmasq-dns-8647866847-sn996\" (UID: \"0223a6f4-1b74-490b-913d-9421094e5f35\") " pod="openstack/dnsmasq-dns-8647866847-sn996" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.367170 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-757cdb9855-pfpj6" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.416050 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8647866847-sn996" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.420682 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4383f2f1-00d7-4c21-905a-944cd4f852fc-combined-ca-bundle\") pod \"barbican-api-5df994f884-hfwfn\" (UID: \"4383f2f1-00d7-4c21-905a-944cd4f852fc\") " pod="openstack/barbican-api-5df994f884-hfwfn" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.421432 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4383f2f1-00d7-4c21-905a-944cd4f852fc-logs\") pod \"barbican-api-5df994f884-hfwfn\" (UID: \"4383f2f1-00d7-4c21-905a-944cd4f852fc\") " pod="openstack/barbican-api-5df994f884-hfwfn" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.421574 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4383f2f1-00d7-4c21-905a-944cd4f852fc-config-data-custom\") pod \"barbican-api-5df994f884-hfwfn\" (UID: \"4383f2f1-00d7-4c21-905a-944cd4f852fc\") " pod="openstack/barbican-api-5df994f884-hfwfn" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.421618 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4383f2f1-00d7-4c21-905a-944cd4f852fc-config-data\") pod \"barbican-api-5df994f884-hfwfn\" (UID: \"4383f2f1-00d7-4c21-905a-944cd4f852fc\") " pod="openstack/barbican-api-5df994f884-hfwfn" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.421766 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfcpw\" (UniqueName: \"kubernetes.io/projected/4383f2f1-00d7-4c21-905a-944cd4f852fc-kube-api-access-dfcpw\") pod 
\"barbican-api-5df994f884-hfwfn\" (UID: \"4383f2f1-00d7-4c21-905a-944cd4f852fc\") " pod="openstack/barbican-api-5df994f884-hfwfn" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.524131 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4383f2f1-00d7-4c21-905a-944cd4f852fc-config-data-custom\") pod \"barbican-api-5df994f884-hfwfn\" (UID: \"4383f2f1-00d7-4c21-905a-944cd4f852fc\") " pod="openstack/barbican-api-5df994f884-hfwfn" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.524187 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4383f2f1-00d7-4c21-905a-944cd4f852fc-config-data\") pod \"barbican-api-5df994f884-hfwfn\" (UID: \"4383f2f1-00d7-4c21-905a-944cd4f852fc\") " pod="openstack/barbican-api-5df994f884-hfwfn" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.524254 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfcpw\" (UniqueName: \"kubernetes.io/projected/4383f2f1-00d7-4c21-905a-944cd4f852fc-kube-api-access-dfcpw\") pod \"barbican-api-5df994f884-hfwfn\" (UID: \"4383f2f1-00d7-4c21-905a-944cd4f852fc\") " pod="openstack/barbican-api-5df994f884-hfwfn" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.524338 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4383f2f1-00d7-4c21-905a-944cd4f852fc-combined-ca-bundle\") pod \"barbican-api-5df994f884-hfwfn\" (UID: \"4383f2f1-00d7-4c21-905a-944cd4f852fc\") " pod="openstack/barbican-api-5df994f884-hfwfn" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.524426 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4383f2f1-00d7-4c21-905a-944cd4f852fc-logs\") pod \"barbican-api-5df994f884-hfwfn\" (UID: 
\"4383f2f1-00d7-4c21-905a-944cd4f852fc\") " pod="openstack/barbican-api-5df994f884-hfwfn" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.525851 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4383f2f1-00d7-4c21-905a-944cd4f852fc-logs\") pod \"barbican-api-5df994f884-hfwfn\" (UID: \"4383f2f1-00d7-4c21-905a-944cd4f852fc\") " pod="openstack/barbican-api-5df994f884-hfwfn" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.535371 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4383f2f1-00d7-4c21-905a-944cd4f852fc-config-data-custom\") pod \"barbican-api-5df994f884-hfwfn\" (UID: \"4383f2f1-00d7-4c21-905a-944cd4f852fc\") " pod="openstack/barbican-api-5df994f884-hfwfn" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.535664 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4383f2f1-00d7-4c21-905a-944cd4f852fc-config-data\") pod \"barbican-api-5df994f884-hfwfn\" (UID: \"4383f2f1-00d7-4c21-905a-944cd4f852fc\") " pod="openstack/barbican-api-5df994f884-hfwfn" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.536547 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4383f2f1-00d7-4c21-905a-944cd4f852fc-combined-ca-bundle\") pod \"barbican-api-5df994f884-hfwfn\" (UID: \"4383f2f1-00d7-4c21-905a-944cd4f852fc\") " pod="openstack/barbican-api-5df994f884-hfwfn" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.553919 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfcpw\" (UniqueName: \"kubernetes.io/projected/4383f2f1-00d7-4c21-905a-944cd4f852fc-kube-api-access-dfcpw\") pod \"barbican-api-5df994f884-hfwfn\" (UID: \"4383f2f1-00d7-4c21-905a-944cd4f852fc\") " pod="openstack/barbican-api-5df994f884-hfwfn" 
Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.643779 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24140731-e427-429e-a6cc-ad33f28eadb3" path="/var/lib/kubelet/pods/24140731-e427-429e-a6cc-ad33f28eadb3/volumes" Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.718016 4760 generic.go:334] "Generic (PLEG): container finished" podID="3bf0e00e-fc38-45a9-8615-dd5398ed1209" containerID="598ec327f33c8f0775a344d68602f27c1cbe21cb28c1e14088316a4fccca40b4" exitCode=0 Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.718108 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-j76bd" event={"ID":"3bf0e00e-fc38-45a9-8615-dd5398ed1209","Type":"ContainerDied","Data":"598ec327f33c8f0775a344d68602f27c1cbe21cb28c1e14088316a4fccca40b4"} Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.730358 4760 generic.go:334] "Generic (PLEG): container finished" podID="a8b68aa1-7489-4689-ad6b-8aa7149b9a67" containerID="cef94343eaae73039a287ce5f2e8e733ae8f53526605a54ba84af6fd34b8833f" exitCode=0 Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.730405 4760 generic.go:334] "Generic (PLEG): container finished" podID="a8b68aa1-7489-4689-ad6b-8aa7149b9a67" containerID="32ffc3ef3b4941d1f85287b3917cf08bba8a482a2d1cac30e443cf2335d64089" exitCode=2 Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.730476 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8b68aa1-7489-4689-ad6b-8aa7149b9a67","Type":"ContainerDied","Data":"cef94343eaae73039a287ce5f2e8e733ae8f53526605a54ba84af6fd34b8833f"} Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.730516 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8b68aa1-7489-4689-ad6b-8aa7149b9a67","Type":"ContainerDied","Data":"32ffc3ef3b4941d1f85287b3917cf08bba8a482a2d1cac30e443cf2335d64089"} Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.737636 4760 generic.go:334] 
"Generic (PLEG): container finished" podID="c4fdfaae-d8ad-46d6-b30a-1b671408ca51" containerID="ba59503ee28149f2f6bd1845497fbf26cee641517850130c45c50378919dce1a" exitCode=0 Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.737705 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-nf7wp" event={"ID":"c4fdfaae-d8ad-46d6-b30a-1b671408ca51","Type":"ContainerDied","Data":"ba59503ee28149f2f6bd1845497fbf26cee641517850130c45c50378919dce1a"} Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.796728 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5df994f884-hfwfn" Jan 21 16:07:07 crc kubenswrapper[4760]: W0121 16:07:07.994915 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6283023b_6e8b_4d25_b8e9_c0d91b08a913.slice/crio-803d1a97461d0c15f98e30feca74f413e5ecb154545c422fdafec40b26530732 WatchSource:0}: Error finding container 803d1a97461d0c15f98e30feca74f413e5ecb154545c422fdafec40b26530732: Status 404 returned error can't find the container with id 803d1a97461d0c15f98e30feca74f413e5ecb154545c422fdafec40b26530732 Jan 21 16:07:07 crc kubenswrapper[4760]: I0121 16:07:07.996481 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-86579fc786-9vmn6"] Jan 21 16:07:08 crc kubenswrapper[4760]: I0121 16:07:08.125466 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-757cdb9855-pfpj6"] Jan 21 16:07:08 crc kubenswrapper[4760]: W0121 16:07:08.127561 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod470850c9_a1ed_4ea2_b7f1_b3bc6745b6ed.slice/crio-56be73219163a748eaf5a7129a0f3c12bd36886008f7415cc42020dc505b108d WatchSource:0}: Error finding container 56be73219163a748eaf5a7129a0f3c12bd36886008f7415cc42020dc505b108d: Status 404 returned error can't 
find the container with id 56be73219163a748eaf5a7129a0f3c12bd36886008f7415cc42020dc505b108d Jan 21 16:07:08 crc kubenswrapper[4760]: I0121 16:07:08.216373 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8647866847-sn996"] Jan 21 16:07:08 crc kubenswrapper[4760]: W0121 16:07:08.220467 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0223a6f4_1b74_490b_913d_9421094e5f35.slice/crio-20e65130ae9d42662dfae80cbec2051400d4c3fed0c8cc2bf296af6c3b1272df WatchSource:0}: Error finding container 20e65130ae9d42662dfae80cbec2051400d4c3fed0c8cc2bf296af6c3b1272df: Status 404 returned error can't find the container with id 20e65130ae9d42662dfae80cbec2051400d4c3fed0c8cc2bf296af6c3b1272df Jan 21 16:07:08 crc kubenswrapper[4760]: I0121 16:07:08.368914 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5df994f884-hfwfn"] Jan 21 16:07:08 crc kubenswrapper[4760]: W0121 16:07:08.371237 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4383f2f1_00d7_4c21_905a_944cd4f852fc.slice/crio-8bec3b20d7b1705e8c8c30a5ecc62a2a295136a16c3bef117e56b2501c5643a3 WatchSource:0}: Error finding container 8bec3b20d7b1705e8c8c30a5ecc62a2a295136a16c3bef117e56b2501c5643a3: Status 404 returned error can't find the container with id 8bec3b20d7b1705e8c8c30a5ecc62a2a295136a16c3bef117e56b2501c5643a3 Jan 21 16:07:08 crc kubenswrapper[4760]: E0121 16:07:08.721453 4760 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8b68aa1_7489_4689_ad6b_8aa7149b9a67.slice/crio-9d700a5c7ac14932980f3741c7873a20c30a9d2f8d3054dc9aae795c4000fb25.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0223a6f4_1b74_490b_913d_9421094e5f35.slice/crio-68ff03dcd7cd33dee67a72eb48fc0104ce8d08489c5e991c768bed3e2c728a3b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0223a6f4_1b74_490b_913d_9421094e5f35.slice/crio-conmon-68ff03dcd7cd33dee67a72eb48fc0104ce8d08489c5e991c768bed3e2c728a3b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8b68aa1_7489_4689_ad6b_8aa7149b9a67.slice/crio-conmon-9d700a5c7ac14932980f3741c7873a20c30a9d2f8d3054dc9aae795c4000fb25.scope\": RecentStats: unable to find data in memory cache]" Jan 21 16:07:08 crc kubenswrapper[4760]: I0121 16:07:08.758059 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-757cdb9855-pfpj6" event={"ID":"470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed","Type":"ContainerStarted","Data":"56be73219163a748eaf5a7129a0f3c12bd36886008f7415cc42020dc505b108d"} Jan 21 16:07:08 crc kubenswrapper[4760]: I0121 16:07:08.760986 4760 generic.go:334] "Generic (PLEG): container finished" podID="0223a6f4-1b74-490b-913d-9421094e5f35" containerID="68ff03dcd7cd33dee67a72eb48fc0104ce8d08489c5e991c768bed3e2c728a3b" exitCode=0 Jan 21 16:07:08 crc kubenswrapper[4760]: I0121 16:07:08.761062 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8647866847-sn996" event={"ID":"0223a6f4-1b74-490b-913d-9421094e5f35","Type":"ContainerDied","Data":"68ff03dcd7cd33dee67a72eb48fc0104ce8d08489c5e991c768bed3e2c728a3b"} Jan 21 16:07:08 crc kubenswrapper[4760]: I0121 16:07:08.761094 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8647866847-sn996" event={"ID":"0223a6f4-1b74-490b-913d-9421094e5f35","Type":"ContainerStarted","Data":"20e65130ae9d42662dfae80cbec2051400d4c3fed0c8cc2bf296af6c3b1272df"} Jan 21 16:07:08 crc kubenswrapper[4760]: I0121 
16:07:08.774880 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5df994f884-hfwfn" event={"ID":"4383f2f1-00d7-4c21-905a-944cd4f852fc","Type":"ContainerStarted","Data":"3e7cedae06c543fbef38a7847d35719cdd0c7f753de3bd32a6d076c460380278"} Jan 21 16:07:08 crc kubenswrapper[4760]: I0121 16:07:08.774949 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5df994f884-hfwfn" event={"ID":"4383f2f1-00d7-4c21-905a-944cd4f852fc","Type":"ContainerStarted","Data":"8bec3b20d7b1705e8c8c30a5ecc62a2a295136a16c3bef117e56b2501c5643a3"} Jan 21 16:07:08 crc kubenswrapper[4760]: I0121 16:07:08.777359 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-86579fc786-9vmn6" event={"ID":"6283023b-6e8b-4d25-b8e9-c0d91b08a913","Type":"ContainerStarted","Data":"803d1a97461d0c15f98e30feca74f413e5ecb154545c422fdafec40b26530732"} Jan 21 16:07:08 crc kubenswrapper[4760]: I0121 16:07:08.779988 4760 generic.go:334] "Generic (PLEG): container finished" podID="a8b68aa1-7489-4689-ad6b-8aa7149b9a67" containerID="9d700a5c7ac14932980f3741c7873a20c30a9d2f8d3054dc9aae795c4000fb25" exitCode=0 Jan 21 16:07:08 crc kubenswrapper[4760]: I0121 16:07:08.780267 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8b68aa1-7489-4689-ad6b-8aa7149b9a67","Type":"ContainerDied","Data":"9d700a5c7ac14932980f3741c7873a20c30a9d2f8d3054dc9aae795c4000fb25"} Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.074633 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.189116 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-scripts\") pod \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\" (UID: \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\") " Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.191299 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-log-httpd\") pod \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\" (UID: \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\") " Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.191781 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-run-httpd\") pod \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\" (UID: \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\") " Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.192123 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a8b68aa1-7489-4689-ad6b-8aa7149b9a67" (UID: "a8b68aa1-7489-4689-ad6b-8aa7149b9a67"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.192317 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a8b68aa1-7489-4689-ad6b-8aa7149b9a67" (UID: "a8b68aa1-7489-4689-ad6b-8aa7149b9a67"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.192379 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-sg-core-conf-yaml\") pod \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\" (UID: \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\") " Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.192539 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxfdz\" (UniqueName: \"kubernetes.io/projected/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-kube-api-access-dxfdz\") pod \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\" (UID: \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\") " Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.192628 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-combined-ca-bundle\") pod \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\" (UID: \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\") " Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.192664 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-config-data\") pod \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\" (UID: \"a8b68aa1-7489-4689-ad6b-8aa7149b9a67\") " Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.193746 4760 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.193769 4760 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-run-httpd\") on node \"crc\" DevicePath 
\"\"" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.206516 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-scripts" (OuterVolumeSpecName: "scripts") pod "a8b68aa1-7489-4689-ad6b-8aa7149b9a67" (UID: "a8b68aa1-7489-4689-ad6b-8aa7149b9a67"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.212428 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-kube-api-access-dxfdz" (OuterVolumeSpecName: "kube-api-access-dxfdz") pod "a8b68aa1-7489-4689-ad6b-8aa7149b9a67" (UID: "a8b68aa1-7489-4689-ad6b-8aa7149b9a67"). InnerVolumeSpecName "kube-api-access-dxfdz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.292273 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a8b68aa1-7489-4689-ad6b-8aa7149b9a67" (UID: "a8b68aa1-7489-4689-ad6b-8aa7149b9a67"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.296653 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxfdz\" (UniqueName: \"kubernetes.io/projected/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-kube-api-access-dxfdz\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.296690 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.296702 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.306552 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a8b68aa1-7489-4689-ad6b-8aa7149b9a67" (UID: "a8b68aa1-7489-4689-ad6b-8aa7149b9a67"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.399582 4760 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.421389 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-config-data" (OuterVolumeSpecName: "config-data") pod "a8b68aa1-7489-4689-ad6b-8aa7149b9a67" (UID: "a8b68aa1-7489-4689-ad6b-8aa7149b9a67"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.501769 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8b68aa1-7489-4689-ad6b-8aa7149b9a67-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.534400 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-j76bd" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.606624 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bf0e00e-fc38-45a9-8615-dd5398ed1209-combined-ca-bundle\") pod \"3bf0e00e-fc38-45a9-8615-dd5398ed1209\" (UID: \"3bf0e00e-fc38-45a9-8615-dd5398ed1209\") " Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.606717 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bf0e00e-fc38-45a9-8615-dd5398ed1209-config-data\") pod \"3bf0e00e-fc38-45a9-8615-dd5398ed1209\" (UID: \"3bf0e00e-fc38-45a9-8615-dd5398ed1209\") " Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.606949 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3bf0e00e-fc38-45a9-8615-dd5398ed1209-etc-machine-id\") pod \"3bf0e00e-fc38-45a9-8615-dd5398ed1209\" (UID: \"3bf0e00e-fc38-45a9-8615-dd5398ed1209\") " Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.606990 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nt8ww\" (UniqueName: \"kubernetes.io/projected/3bf0e00e-fc38-45a9-8615-dd5398ed1209-kube-api-access-nt8ww\") pod \"3bf0e00e-fc38-45a9-8615-dd5398ed1209\" (UID: \"3bf0e00e-fc38-45a9-8615-dd5398ed1209\") " Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.607029 4760 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bf0e00e-fc38-45a9-8615-dd5398ed1209-scripts\") pod \"3bf0e00e-fc38-45a9-8615-dd5398ed1209\" (UID: \"3bf0e00e-fc38-45a9-8615-dd5398ed1209\") " Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.607151 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3bf0e00e-fc38-45a9-8615-dd5398ed1209-db-sync-config-data\") pod \"3bf0e00e-fc38-45a9-8615-dd5398ed1209\" (UID: \"3bf0e00e-fc38-45a9-8615-dd5398ed1209\") " Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.670350 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bf0e00e-fc38-45a9-8615-dd5398ed1209-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "3bf0e00e-fc38-45a9-8615-dd5398ed1209" (UID: "3bf0e00e-fc38-45a9-8615-dd5398ed1209"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.672689 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bf0e00e-fc38-45a9-8615-dd5398ed1209-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "3bf0e00e-fc38-45a9-8615-dd5398ed1209" (UID: "3bf0e00e-fc38-45a9-8615-dd5398ed1209"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.688849 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bf0e00e-fc38-45a9-8615-dd5398ed1209-kube-api-access-nt8ww" (OuterVolumeSpecName: "kube-api-access-nt8ww") pod "3bf0e00e-fc38-45a9-8615-dd5398ed1209" (UID: "3bf0e00e-fc38-45a9-8615-dd5398ed1209"). InnerVolumeSpecName "kube-api-access-nt8ww". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.699975 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bf0e00e-fc38-45a9-8615-dd5398ed1209-scripts" (OuterVolumeSpecName: "scripts") pod "3bf0e00e-fc38-45a9-8615-dd5398ed1209" (UID: "3bf0e00e-fc38-45a9-8615-dd5398ed1209"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.709285 4760 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3bf0e00e-fc38-45a9-8615-dd5398ed1209-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.709352 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nt8ww\" (UniqueName: \"kubernetes.io/projected/3bf0e00e-fc38-45a9-8615-dd5398ed1209-kube-api-access-nt8ww\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.709368 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bf0e00e-fc38-45a9-8615-dd5398ed1209-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.709379 4760 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3bf0e00e-fc38-45a9-8615-dd5398ed1209-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.731823 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bf0e00e-fc38-45a9-8615-dd5398ed1209-config-data" (OuterVolumeSpecName: "config-data") pod "3bf0e00e-fc38-45a9-8615-dd5398ed1209" (UID: "3bf0e00e-fc38-45a9-8615-dd5398ed1209"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.740600 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-nf7wp" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.742930 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bf0e00e-fc38-45a9-8615-dd5398ed1209-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3bf0e00e-fc38-45a9-8615-dd5398ed1209" (UID: "3bf0e00e-fc38-45a9-8615-dd5398ed1209"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.811071 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c4fdfaae-d8ad-46d6-b30a-1b671408ca51-db-sync-config-data\") pod \"c4fdfaae-d8ad-46d6-b30a-1b671408ca51\" (UID: \"c4fdfaae-d8ad-46d6-b30a-1b671408ca51\") " Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.811148 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4fdfaae-d8ad-46d6-b30a-1b671408ca51-combined-ca-bundle\") pod \"c4fdfaae-d8ad-46d6-b30a-1b671408ca51\" (UID: \"c4fdfaae-d8ad-46d6-b30a-1b671408ca51\") " Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.811193 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdqk9\" (UniqueName: \"kubernetes.io/projected/c4fdfaae-d8ad-46d6-b30a-1b671408ca51-kube-api-access-zdqk9\") pod \"c4fdfaae-d8ad-46d6-b30a-1b671408ca51\" (UID: \"c4fdfaae-d8ad-46d6-b30a-1b671408ca51\") " Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.811366 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c4fdfaae-d8ad-46d6-b30a-1b671408ca51-config-data\") pod \"c4fdfaae-d8ad-46d6-b30a-1b671408ca51\" (UID: \"c4fdfaae-d8ad-46d6-b30a-1b671408ca51\") " Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.812491 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bf0e00e-fc38-45a9-8615-dd5398ed1209-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.812553 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bf0e00e-fc38-45a9-8615-dd5398ed1209-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.830760 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4fdfaae-d8ad-46d6-b30a-1b671408ca51-kube-api-access-zdqk9" (OuterVolumeSpecName: "kube-api-access-zdqk9") pod "c4fdfaae-d8ad-46d6-b30a-1b671408ca51" (UID: "c4fdfaae-d8ad-46d6-b30a-1b671408ca51"). InnerVolumeSpecName "kube-api-access-zdqk9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.831445 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4fdfaae-d8ad-46d6-b30a-1b671408ca51-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "c4fdfaae-d8ad-46d6-b30a-1b671408ca51" (UID: "c4fdfaae-d8ad-46d6-b30a-1b671408ca51"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.839253 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8b68aa1-7489-4689-ad6b-8aa7149b9a67","Type":"ContainerDied","Data":"67106a7322a6efcc713f80439a85e4ab5666dd6671b1f1903ab8d4cfe53081b5"} Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.839364 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.839402 4760 scope.go:117] "RemoveContainer" containerID="cef94343eaae73039a287ce5f2e8e733ae8f53526605a54ba84af6fd34b8833f" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.847133 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-nf7wp" event={"ID":"c4fdfaae-d8ad-46d6-b30a-1b671408ca51","Type":"ContainerDied","Data":"3e783af74f6b9f49b3e2f4151440e443595611d84b2a228004ee9415a684f5ab"} Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.847192 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e783af74f6b9f49b3e2f4151440e443595611d84b2a228004ee9415a684f5ab" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.847357 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-nf7wp" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.854609 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-j76bd" event={"ID":"3bf0e00e-fc38-45a9-8615-dd5398ed1209","Type":"ContainerDied","Data":"af5ddb7d0cdc80be37d99d40d9448dcfd4fc35785a84df5df1d392f0ec375992"} Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.854687 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af5ddb7d0cdc80be37d99d40d9448dcfd4fc35785a84df5df1d392f0ec375992" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.854726 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-j76bd" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.863076 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4fdfaae-d8ad-46d6-b30a-1b671408ca51-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c4fdfaae-d8ad-46d6-b30a-1b671408ca51" (UID: "c4fdfaae-d8ad-46d6-b30a-1b671408ca51"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.869787 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8647866847-sn996" event={"ID":"0223a6f4-1b74-490b-913d-9421094e5f35","Type":"ContainerStarted","Data":"9b987d4480d7463c9fbb398ca5e064fe46e718548bea825d0b551bd4b2c0635b"} Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.870821 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8647866847-sn996" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.883488 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5df994f884-hfwfn" event={"ID":"4383f2f1-00d7-4c21-905a-944cd4f852fc","Type":"ContainerStarted","Data":"9ab8f2da6be3b8cb3c830392620ab6c9ec70000de151fbfa039c844d7dd5b01b"} Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.884662 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5df994f884-hfwfn" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.884699 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5df994f884-hfwfn" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.885960 4760 scope.go:117] "RemoveContainer" containerID="32ffc3ef3b4941d1f85287b3917cf08bba8a482a2d1cac30e443cf2335d64089" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.924056 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4fdfaae-d8ad-46d6-b30a-1b671408ca51-config-data" (OuterVolumeSpecName: "config-data") pod "c4fdfaae-d8ad-46d6-b30a-1b671408ca51" (UID: "c4fdfaae-d8ad-46d6-b30a-1b671408ca51"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.942550 4760 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c4fdfaae-d8ad-46d6-b30a-1b671408ca51-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.942646 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4fdfaae-d8ad-46d6-b30a-1b671408ca51-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.942665 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdqk9\" (UniqueName: \"kubernetes.io/projected/c4fdfaae-d8ad-46d6-b30a-1b671408ca51-kube-api-access-zdqk9\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.942680 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4fdfaae-d8ad-46d6-b30a-1b671408ca51-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.950214 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.959450 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.978899 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:07:09 crc kubenswrapper[4760]: E0121 16:07:09.979430 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bf0e00e-fc38-45a9-8615-dd5398ed1209" containerName="cinder-db-sync" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.979452 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bf0e00e-fc38-45a9-8615-dd5398ed1209" containerName="cinder-db-sync" Jan 21 16:07:09 crc 
kubenswrapper[4760]: E0121 16:07:09.979483 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8b68aa1-7489-4689-ad6b-8aa7149b9a67" containerName="proxy-httpd" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.979490 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8b68aa1-7489-4689-ad6b-8aa7149b9a67" containerName="proxy-httpd" Jan 21 16:07:09 crc kubenswrapper[4760]: E0121 16:07:09.979497 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8b68aa1-7489-4689-ad6b-8aa7149b9a67" containerName="sg-core" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.979505 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8b68aa1-7489-4689-ad6b-8aa7149b9a67" containerName="sg-core" Jan 21 16:07:09 crc kubenswrapper[4760]: E0121 16:07:09.979519 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8b68aa1-7489-4689-ad6b-8aa7149b9a67" containerName="ceilometer-notification-agent" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.979527 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8b68aa1-7489-4689-ad6b-8aa7149b9a67" containerName="ceilometer-notification-agent" Jan 21 16:07:09 crc kubenswrapper[4760]: E0121 16:07:09.979551 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4fdfaae-d8ad-46d6-b30a-1b671408ca51" containerName="glance-db-sync" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.979559 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4fdfaae-d8ad-46d6-b30a-1b671408ca51" containerName="glance-db-sync" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.979758 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bf0e00e-fc38-45a9-8615-dd5398ed1209" containerName="cinder-db-sync" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.979778 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8b68aa1-7489-4689-ad6b-8aa7149b9a67" containerName="ceilometer-notification-agent" Jan 21 
16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.979886 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4fdfaae-d8ad-46d6-b30a-1b671408ca51" containerName="glance-db-sync" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.979912 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8b68aa1-7489-4689-ad6b-8aa7149b9a67" containerName="proxy-httpd" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.979924 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8b68aa1-7489-4689-ad6b-8aa7149b9a67" containerName="sg-core" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.986696 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8647866847-sn996" podStartSLOduration=2.986660442 podStartE2EDuration="2.986660442s" podCreationTimestamp="2026-01-21 16:07:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:07:09.924564297 +0000 UTC m=+1200.592333905" watchObservedRunningTime="2026-01-21 16:07:09.986660442 +0000 UTC m=+1200.654430020" Jan 21 16:07:09 crc kubenswrapper[4760]: I0121 16:07:09.990796 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.003034 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.004062 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.004128 4760 scope.go:117] "RemoveContainer" containerID="9d700a5c7ac14932980f3741c7873a20c30a9d2f8d3054dc9aae795c4000fb25" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.017285 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.025995 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5df994f884-hfwfn" podStartSLOduration=3.025961312 podStartE2EDuration="3.025961312s" podCreationTimestamp="2026-01-21 16:07:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:07:09.973577209 +0000 UTC m=+1200.641346787" watchObservedRunningTime="2026-01-21 16:07:10.025961312 +0000 UTC m=+1200.693730890" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.051041 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cae71bb0-4c04-47db-a201-a172da79df7f-log-httpd\") pod \"ceilometer-0\" (UID: \"cae71bb0-4c04-47db-a201-a172da79df7f\") " pod="openstack/ceilometer-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.051204 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cae71bb0-4c04-47db-a201-a172da79df7f-scripts\") pod \"ceilometer-0\" (UID: \"cae71bb0-4c04-47db-a201-a172da79df7f\") " 
pod="openstack/ceilometer-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.051353 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cae71bb0-4c04-47db-a201-a172da79df7f-config-data\") pod \"ceilometer-0\" (UID: \"cae71bb0-4c04-47db-a201-a172da79df7f\") " pod="openstack/ceilometer-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.051660 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cae71bb0-4c04-47db-a201-a172da79df7f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cae71bb0-4c04-47db-a201-a172da79df7f\") " pod="openstack/ceilometer-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.051863 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cae71bb0-4c04-47db-a201-a172da79df7f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cae71bb0-4c04-47db-a201-a172da79df7f\") " pod="openstack/ceilometer-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.052009 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cae71bb0-4c04-47db-a201-a172da79df7f-run-httpd\") pod \"ceilometer-0\" (UID: \"cae71bb0-4c04-47db-a201-a172da79df7f\") " pod="openstack/ceilometer-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.052174 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlhzv\" (UniqueName: \"kubernetes.io/projected/cae71bb0-4c04-47db-a201-a172da79df7f-kube-api-access-wlhzv\") pod \"ceilometer-0\" (UID: \"cae71bb0-4c04-47db-a201-a172da79df7f\") " pod="openstack/ceilometer-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.125610 4760 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.131846 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.141367 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.141616 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.141791 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.141907 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-l8crm" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.163310 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlhzv\" (UniqueName: \"kubernetes.io/projected/cae71bb0-4c04-47db-a201-a172da79df7f-kube-api-access-wlhzv\") pod \"ceilometer-0\" (UID: \"cae71bb0-4c04-47db-a201-a172da79df7f\") " pod="openstack/ceilometer-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.163609 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cae71bb0-4c04-47db-a201-a172da79df7f-log-httpd\") pod \"ceilometer-0\" (UID: \"cae71bb0-4c04-47db-a201-a172da79df7f\") " pod="openstack/ceilometer-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.163771 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cae71bb0-4c04-47db-a201-a172da79df7f-scripts\") pod \"ceilometer-0\" (UID: \"cae71bb0-4c04-47db-a201-a172da79df7f\") " pod="openstack/ceilometer-0" Jan 21 16:07:10 crc 
kubenswrapper[4760]: I0121 16:07:10.164000 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cae71bb0-4c04-47db-a201-a172da79df7f-config-data\") pod \"ceilometer-0\" (UID: \"cae71bb0-4c04-47db-a201-a172da79df7f\") " pod="openstack/ceilometer-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.164174 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cae71bb0-4c04-47db-a201-a172da79df7f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cae71bb0-4c04-47db-a201-a172da79df7f\") " pod="openstack/ceilometer-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.164369 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cae71bb0-4c04-47db-a201-a172da79df7f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cae71bb0-4c04-47db-a201-a172da79df7f\") " pod="openstack/ceilometer-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.164577 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cae71bb0-4c04-47db-a201-a172da79df7f-run-httpd\") pod \"ceilometer-0\" (UID: \"cae71bb0-4c04-47db-a201-a172da79df7f\") " pod="openstack/ceilometer-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.166596 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cae71bb0-4c04-47db-a201-a172da79df7f-log-httpd\") pod \"ceilometer-0\" (UID: \"cae71bb0-4c04-47db-a201-a172da79df7f\") " pod="openstack/ceilometer-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.166792 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cae71bb0-4c04-47db-a201-a172da79df7f-run-httpd\") pod \"ceilometer-0\" 
(UID: \"cae71bb0-4c04-47db-a201-a172da79df7f\") " pod="openstack/ceilometer-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.170018 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.189206 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cae71bb0-4c04-47db-a201-a172da79df7f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cae71bb0-4c04-47db-a201-a172da79df7f\") " pod="openstack/ceilometer-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.203540 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8647866847-sn996"] Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.209630 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cae71bb0-4c04-47db-a201-a172da79df7f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cae71bb0-4c04-47db-a201-a172da79df7f\") " pod="openstack/ceilometer-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.211789 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cae71bb0-4c04-47db-a201-a172da79df7f-scripts\") pod \"ceilometer-0\" (UID: \"cae71bb0-4c04-47db-a201-a172da79df7f\") " pod="openstack/ceilometer-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.216367 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlhzv\" (UniqueName: \"kubernetes.io/projected/cae71bb0-4c04-47db-a201-a172da79df7f-kube-api-access-wlhzv\") pod \"ceilometer-0\" (UID: \"cae71bb0-4c04-47db-a201-a172da79df7f\") " pod="openstack/ceilometer-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.228738 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/cae71bb0-4c04-47db-a201-a172da79df7f-config-data\") pod \"ceilometer-0\" (UID: \"cae71bb0-4c04-47db-a201-a172da79df7f\") " pod="openstack/ceilometer-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.245151 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6978f7d9ff-2dmrj"] Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.247234 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6978f7d9ff-2dmrj" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.268272 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29ee8909-527a-4a4d-a04c-9a401c551a6d-scripts\") pod \"cinder-scheduler-0\" (UID: \"29ee8909-527a-4a4d-a04c-9a401c551a6d\") " pod="openstack/cinder-scheduler-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.268385 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt7nq\" (UniqueName: \"kubernetes.io/projected/29ee8909-527a-4a4d-a04c-9a401c551a6d-kube-api-access-jt7nq\") pod \"cinder-scheduler-0\" (UID: \"29ee8909-527a-4a4d-a04c-9a401c551a6d\") " pod="openstack/cinder-scheduler-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.268436 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29ee8909-527a-4a4d-a04c-9a401c551a6d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"29ee8909-527a-4a4d-a04c-9a401c551a6d\") " pod="openstack/cinder-scheduler-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.268469 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29ee8909-527a-4a4d-a04c-9a401c551a6d-config-data\") pod \"cinder-scheduler-0\" (UID: 
\"29ee8909-527a-4a4d-a04c-9a401c551a6d\") " pod="openstack/cinder-scheduler-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.268519 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/29ee8909-527a-4a4d-a04c-9a401c551a6d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"29ee8909-527a-4a4d-a04c-9a401c551a6d\") " pod="openstack/cinder-scheduler-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.268603 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29ee8909-527a-4a4d-a04c-9a401c551a6d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"29ee8909-527a-4a4d-a04c-9a401c551a6d\") " pod="openstack/cinder-scheduler-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.277373 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6978f7d9ff-2dmrj"] Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.354848 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.371261 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2875e741-9553-4d41-9658-0128dfe5d27e-ovsdbserver-sb\") pod \"dnsmasq-dns-6978f7d9ff-2dmrj\" (UID: \"2875e741-9553-4d41-9658-0128dfe5d27e\") " pod="openstack/dnsmasq-dns-6978f7d9ff-2dmrj" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.371340 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jt7nq\" (UniqueName: \"kubernetes.io/projected/29ee8909-527a-4a4d-a04c-9a401c551a6d-kube-api-access-jt7nq\") pod \"cinder-scheduler-0\" (UID: \"29ee8909-527a-4a4d-a04c-9a401c551a6d\") " pod="openstack/cinder-scheduler-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.371384 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2875e741-9553-4d41-9658-0128dfe5d27e-ovsdbserver-nb\") pod \"dnsmasq-dns-6978f7d9ff-2dmrj\" (UID: \"2875e741-9553-4d41-9658-0128dfe5d27e\") " pod="openstack/dnsmasq-dns-6978f7d9ff-2dmrj" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.371409 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29ee8909-527a-4a4d-a04c-9a401c551a6d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"29ee8909-527a-4a4d-a04c-9a401c551a6d\") " pod="openstack/cinder-scheduler-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.371440 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29ee8909-527a-4a4d-a04c-9a401c551a6d-config-data\") pod \"cinder-scheduler-0\" (UID: \"29ee8909-527a-4a4d-a04c-9a401c551a6d\") " pod="openstack/cinder-scheduler-0" Jan 21 
16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.371461 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2875e741-9553-4d41-9658-0128dfe5d27e-dns-svc\") pod \"dnsmasq-dns-6978f7d9ff-2dmrj\" (UID: \"2875e741-9553-4d41-9658-0128dfe5d27e\") " pod="openstack/dnsmasq-dns-6978f7d9ff-2dmrj" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.371517 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/29ee8909-527a-4a4d-a04c-9a401c551a6d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"29ee8909-527a-4a4d-a04c-9a401c551a6d\") " pod="openstack/cinder-scheduler-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.371570 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2875e741-9553-4d41-9658-0128dfe5d27e-dns-swift-storage-0\") pod \"dnsmasq-dns-6978f7d9ff-2dmrj\" (UID: \"2875e741-9553-4d41-9658-0128dfe5d27e\") " pod="openstack/dnsmasq-dns-6978f7d9ff-2dmrj" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.371598 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29ee8909-527a-4a4d-a04c-9a401c551a6d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"29ee8909-527a-4a4d-a04c-9a401c551a6d\") " pod="openstack/cinder-scheduler-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.371640 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxhkx\" (UniqueName: \"kubernetes.io/projected/2875e741-9553-4d41-9658-0128dfe5d27e-kube-api-access-vxhkx\") pod \"dnsmasq-dns-6978f7d9ff-2dmrj\" (UID: \"2875e741-9553-4d41-9658-0128dfe5d27e\") " pod="openstack/dnsmasq-dns-6978f7d9ff-2dmrj" Jan 21 16:07:10 crc 
kubenswrapper[4760]: I0121 16:07:10.371658 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29ee8909-527a-4a4d-a04c-9a401c551a6d-scripts\") pod \"cinder-scheduler-0\" (UID: \"29ee8909-527a-4a4d-a04c-9a401c551a6d\") " pod="openstack/cinder-scheduler-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.371682 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2875e741-9553-4d41-9658-0128dfe5d27e-config\") pod \"dnsmasq-dns-6978f7d9ff-2dmrj\" (UID: \"2875e741-9553-4d41-9658-0128dfe5d27e\") " pod="openstack/dnsmasq-dns-6978f7d9ff-2dmrj" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.373545 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/29ee8909-527a-4a4d-a04c-9a401c551a6d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"29ee8909-527a-4a4d-a04c-9a401c551a6d\") " pod="openstack/cinder-scheduler-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.396643 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29ee8909-527a-4a4d-a04c-9a401c551a6d-scripts\") pod \"cinder-scheduler-0\" (UID: \"29ee8909-527a-4a4d-a04c-9a401c551a6d\") " pod="openstack/cinder-scheduler-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.404780 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jt7nq\" (UniqueName: \"kubernetes.io/projected/29ee8909-527a-4a4d-a04c-9a401c551a6d-kube-api-access-jt7nq\") pod \"cinder-scheduler-0\" (UID: \"29ee8909-527a-4a4d-a04c-9a401c551a6d\") " pod="openstack/cinder-scheduler-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.405791 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/29ee8909-527a-4a4d-a04c-9a401c551a6d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"29ee8909-527a-4a4d-a04c-9a401c551a6d\") " pod="openstack/cinder-scheduler-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.406294 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29ee8909-527a-4a4d-a04c-9a401c551a6d-config-data\") pod \"cinder-scheduler-0\" (UID: \"29ee8909-527a-4a4d-a04c-9a401c551a6d\") " pod="openstack/cinder-scheduler-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.411998 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29ee8909-527a-4a4d-a04c-9a401c551a6d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"29ee8909-527a-4a4d-a04c-9a401c551a6d\") " pod="openstack/cinder-scheduler-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.446393 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.449583 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.453419 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.470858 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.473536 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxhkx\" (UniqueName: \"kubernetes.io/projected/2875e741-9553-4d41-9658-0128dfe5d27e-kube-api-access-vxhkx\") pod \"dnsmasq-dns-6978f7d9ff-2dmrj\" (UID: \"2875e741-9553-4d41-9658-0128dfe5d27e\") " pod="openstack/dnsmasq-dns-6978f7d9ff-2dmrj" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.473623 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2875e741-9553-4d41-9658-0128dfe5d27e-config\") pod \"dnsmasq-dns-6978f7d9ff-2dmrj\" (UID: \"2875e741-9553-4d41-9658-0128dfe5d27e\") " pod="openstack/dnsmasq-dns-6978f7d9ff-2dmrj" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.473669 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2875e741-9553-4d41-9658-0128dfe5d27e-ovsdbserver-sb\") pod \"dnsmasq-dns-6978f7d9ff-2dmrj\" (UID: \"2875e741-9553-4d41-9658-0128dfe5d27e\") " pod="openstack/dnsmasq-dns-6978f7d9ff-2dmrj" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.473719 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2875e741-9553-4d41-9658-0128dfe5d27e-ovsdbserver-nb\") pod \"dnsmasq-dns-6978f7d9ff-2dmrj\" (UID: \"2875e741-9553-4d41-9658-0128dfe5d27e\") " pod="openstack/dnsmasq-dns-6978f7d9ff-2dmrj" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.473775 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2875e741-9553-4d41-9658-0128dfe5d27e-dns-svc\") pod \"dnsmasq-dns-6978f7d9ff-2dmrj\" (UID: \"2875e741-9553-4d41-9658-0128dfe5d27e\") " pod="openstack/dnsmasq-dns-6978f7d9ff-2dmrj" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.473858 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2875e741-9553-4d41-9658-0128dfe5d27e-dns-swift-storage-0\") pod \"dnsmasq-dns-6978f7d9ff-2dmrj\" (UID: \"2875e741-9553-4d41-9658-0128dfe5d27e\") " pod="openstack/dnsmasq-dns-6978f7d9ff-2dmrj" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.474995 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2875e741-9553-4d41-9658-0128dfe5d27e-dns-swift-storage-0\") pod \"dnsmasq-dns-6978f7d9ff-2dmrj\" (UID: \"2875e741-9553-4d41-9658-0128dfe5d27e\") " pod="openstack/dnsmasq-dns-6978f7d9ff-2dmrj" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.475118 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2875e741-9553-4d41-9658-0128dfe5d27e-ovsdbserver-sb\") pod \"dnsmasq-dns-6978f7d9ff-2dmrj\" (UID: \"2875e741-9553-4d41-9658-0128dfe5d27e\") " pod="openstack/dnsmasq-dns-6978f7d9ff-2dmrj" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.475886 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2875e741-9553-4d41-9658-0128dfe5d27e-dns-svc\") pod \"dnsmasq-dns-6978f7d9ff-2dmrj\" (UID: \"2875e741-9553-4d41-9658-0128dfe5d27e\") " pod="openstack/dnsmasq-dns-6978f7d9ff-2dmrj" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.476076 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/2875e741-9553-4d41-9658-0128dfe5d27e-ovsdbserver-nb\") pod \"dnsmasq-dns-6978f7d9ff-2dmrj\" (UID: \"2875e741-9553-4d41-9658-0128dfe5d27e\") " pod="openstack/dnsmasq-dns-6978f7d9ff-2dmrj" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.477432 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2875e741-9553-4d41-9658-0128dfe5d27e-config\") pod \"dnsmasq-dns-6978f7d9ff-2dmrj\" (UID: \"2875e741-9553-4d41-9658-0128dfe5d27e\") " pod="openstack/dnsmasq-dns-6978f7d9ff-2dmrj" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.502135 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxhkx\" (UniqueName: \"kubernetes.io/projected/2875e741-9553-4d41-9658-0128dfe5d27e-kube-api-access-vxhkx\") pod \"dnsmasq-dns-6978f7d9ff-2dmrj\" (UID: \"2875e741-9553-4d41-9658-0128dfe5d27e\") " pod="openstack/dnsmasq-dns-6978f7d9ff-2dmrj" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.559489 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6978f7d9ff-2dmrj" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.577314 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e76b744a-9845-4295-80c1-eb276462b45f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e76b744a-9845-4295-80c1-eb276462b45f\") " pod="openstack/cinder-api-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.577392 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e76b744a-9845-4295-80c1-eb276462b45f-config-data\") pod \"cinder-api-0\" (UID: \"e76b744a-9845-4295-80c1-eb276462b45f\") " pod="openstack/cinder-api-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.577425 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e76b744a-9845-4295-80c1-eb276462b45f-logs\") pod \"cinder-api-0\" (UID: \"e76b744a-9845-4295-80c1-eb276462b45f\") " pod="openstack/cinder-api-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.577510 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb2nv\" (UniqueName: \"kubernetes.io/projected/e76b744a-9845-4295-80c1-eb276462b45f-kube-api-access-xb2nv\") pod \"cinder-api-0\" (UID: \"e76b744a-9845-4295-80c1-eb276462b45f\") " pod="openstack/cinder-api-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.577552 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e76b744a-9845-4295-80c1-eb276462b45f-config-data-custom\") pod \"cinder-api-0\" (UID: \"e76b744a-9845-4295-80c1-eb276462b45f\") " pod="openstack/cinder-api-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 
16:07:10.577598 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e76b744a-9845-4295-80c1-eb276462b45f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e76b744a-9845-4295-80c1-eb276462b45f\") " pod="openstack/cinder-api-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.577667 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e76b744a-9845-4295-80c1-eb276462b45f-scripts\") pod \"cinder-api-0\" (UID: \"e76b744a-9845-4295-80c1-eb276462b45f\") " pod="openstack/cinder-api-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.679750 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xb2nv\" (UniqueName: \"kubernetes.io/projected/e76b744a-9845-4295-80c1-eb276462b45f-kube-api-access-xb2nv\") pod \"cinder-api-0\" (UID: \"e76b744a-9845-4295-80c1-eb276462b45f\") " pod="openstack/cinder-api-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.679802 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e76b744a-9845-4295-80c1-eb276462b45f-config-data-custom\") pod \"cinder-api-0\" (UID: \"e76b744a-9845-4295-80c1-eb276462b45f\") " pod="openstack/cinder-api-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.679843 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e76b744a-9845-4295-80c1-eb276462b45f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e76b744a-9845-4295-80c1-eb276462b45f\") " pod="openstack/cinder-api-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.679918 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/e76b744a-9845-4295-80c1-eb276462b45f-scripts\") pod \"cinder-api-0\" (UID: \"e76b744a-9845-4295-80c1-eb276462b45f\") " pod="openstack/cinder-api-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.680033 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e76b744a-9845-4295-80c1-eb276462b45f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e76b744a-9845-4295-80c1-eb276462b45f\") " pod="openstack/cinder-api-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.680072 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e76b744a-9845-4295-80c1-eb276462b45f-config-data\") pod \"cinder-api-0\" (UID: \"e76b744a-9845-4295-80c1-eb276462b45f\") " pod="openstack/cinder-api-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.680093 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e76b744a-9845-4295-80c1-eb276462b45f-logs\") pod \"cinder-api-0\" (UID: \"e76b744a-9845-4295-80c1-eb276462b45f\") " pod="openstack/cinder-api-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.681842 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e76b744a-9845-4295-80c1-eb276462b45f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e76b744a-9845-4295-80c1-eb276462b45f\") " pod="openstack/cinder-api-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.682603 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e76b744a-9845-4295-80c1-eb276462b45f-logs\") pod \"cinder-api-0\" (UID: \"e76b744a-9845-4295-80c1-eb276462b45f\") " pod="openstack/cinder-api-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.685652 4760 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e76b744a-9845-4295-80c1-eb276462b45f-config-data\") pod \"cinder-api-0\" (UID: \"e76b744a-9845-4295-80c1-eb276462b45f\") " pod="openstack/cinder-api-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.685929 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e76b744a-9845-4295-80c1-eb276462b45f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e76b744a-9845-4295-80c1-eb276462b45f\") " pod="openstack/cinder-api-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.688387 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e76b744a-9845-4295-80c1-eb276462b45f-scripts\") pod \"cinder-api-0\" (UID: \"e76b744a-9845-4295-80c1-eb276462b45f\") " pod="openstack/cinder-api-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.688455 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e76b744a-9845-4295-80c1-eb276462b45f-config-data-custom\") pod \"cinder-api-0\" (UID: \"e76b744a-9845-4295-80c1-eb276462b45f\") " pod="openstack/cinder-api-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.693020 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.703618 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xb2nv\" (UniqueName: \"kubernetes.io/projected/e76b744a-9845-4295-80c1-eb276462b45f-kube-api-access-xb2nv\") pod \"cinder-api-0\" (UID: \"e76b744a-9845-4295-80c1-eb276462b45f\") " pod="openstack/cinder-api-0" Jan 21 16:07:10 crc kubenswrapper[4760]: I0121 16:07:10.881075 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 21 16:07:11 crc kubenswrapper[4760]: I0121 16:07:11.226108 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6978f7d9ff-2dmrj"] Jan 21 16:07:11 crc kubenswrapper[4760]: I0121 16:07:11.271186 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-v57lc"] Jan 21 16:07:11 crc kubenswrapper[4760]: I0121 16:07:11.275810 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-v57lc" Jan 21 16:07:11 crc kubenswrapper[4760]: I0121 16:07:11.296220 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/28ae7881-d794-4020-ae6d-a192927d75c8-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-v57lc\" (UID: \"28ae7881-d794-4020-ae6d-a192927d75c8\") " pod="openstack/dnsmasq-dns-6578955fd5-v57lc" Jan 21 16:07:11 crc kubenswrapper[4760]: I0121 16:07:11.296447 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/28ae7881-d794-4020-ae6d-a192927d75c8-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-v57lc\" (UID: \"28ae7881-d794-4020-ae6d-a192927d75c8\") " pod="openstack/dnsmasq-dns-6578955fd5-v57lc" Jan 21 16:07:11 crc kubenswrapper[4760]: I0121 16:07:11.296474 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkl4v\" (UniqueName: \"kubernetes.io/projected/28ae7881-d794-4020-ae6d-a192927d75c8-kube-api-access-qkl4v\") pod \"dnsmasq-dns-6578955fd5-v57lc\" (UID: \"28ae7881-d794-4020-ae6d-a192927d75c8\") " pod="openstack/dnsmasq-dns-6578955fd5-v57lc" Jan 21 16:07:11 crc kubenswrapper[4760]: I0121 16:07:11.296567 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/28ae7881-d794-4020-ae6d-a192927d75c8-dns-svc\") pod \"dnsmasq-dns-6578955fd5-v57lc\" (UID: \"28ae7881-d794-4020-ae6d-a192927d75c8\") " pod="openstack/dnsmasq-dns-6578955fd5-v57lc" Jan 21 16:07:11 crc kubenswrapper[4760]: I0121 16:07:11.296615 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28ae7881-d794-4020-ae6d-a192927d75c8-config\") pod \"dnsmasq-dns-6578955fd5-v57lc\" (UID: \"28ae7881-d794-4020-ae6d-a192927d75c8\") " pod="openstack/dnsmasq-dns-6578955fd5-v57lc" Jan 21 16:07:11 crc kubenswrapper[4760]: I0121 16:07:11.296652 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/28ae7881-d794-4020-ae6d-a192927d75c8-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-v57lc\" (UID: \"28ae7881-d794-4020-ae6d-a192927d75c8\") " pod="openstack/dnsmasq-dns-6578955fd5-v57lc" Jan 21 16:07:11 crc kubenswrapper[4760]: I0121 16:07:11.314821 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-v57lc"] Jan 21 16:07:11 crc kubenswrapper[4760]: I0121 16:07:11.398957 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/28ae7881-d794-4020-ae6d-a192927d75c8-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-v57lc\" (UID: \"28ae7881-d794-4020-ae6d-a192927d75c8\") " pod="openstack/dnsmasq-dns-6578955fd5-v57lc" Jan 21 16:07:11 crc kubenswrapper[4760]: I0121 16:07:11.401294 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/28ae7881-d794-4020-ae6d-a192927d75c8-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-v57lc\" (UID: \"28ae7881-d794-4020-ae6d-a192927d75c8\") " pod="openstack/dnsmasq-dns-6578955fd5-v57lc" Jan 21 16:07:11 crc kubenswrapper[4760]: I0121 
16:07:11.401896 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/28ae7881-d794-4020-ae6d-a192927d75c8-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-v57lc\" (UID: \"28ae7881-d794-4020-ae6d-a192927d75c8\") " pod="openstack/dnsmasq-dns-6578955fd5-v57lc" Jan 21 16:07:11 crc kubenswrapper[4760]: I0121 16:07:11.401971 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkl4v\" (UniqueName: \"kubernetes.io/projected/28ae7881-d794-4020-ae6d-a192927d75c8-kube-api-access-qkl4v\") pod \"dnsmasq-dns-6578955fd5-v57lc\" (UID: \"28ae7881-d794-4020-ae6d-a192927d75c8\") " pod="openstack/dnsmasq-dns-6578955fd5-v57lc" Jan 21 16:07:11 crc kubenswrapper[4760]: I0121 16:07:11.402172 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28ae7881-d794-4020-ae6d-a192927d75c8-dns-svc\") pod \"dnsmasq-dns-6578955fd5-v57lc\" (UID: \"28ae7881-d794-4020-ae6d-a192927d75c8\") " pod="openstack/dnsmasq-dns-6578955fd5-v57lc" Jan 21 16:07:11 crc kubenswrapper[4760]: I0121 16:07:11.402305 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28ae7881-d794-4020-ae6d-a192927d75c8-config\") pod \"dnsmasq-dns-6578955fd5-v57lc\" (UID: \"28ae7881-d794-4020-ae6d-a192927d75c8\") " pod="openstack/dnsmasq-dns-6578955fd5-v57lc" Jan 21 16:07:11 crc kubenswrapper[4760]: I0121 16:07:11.403011 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/28ae7881-d794-4020-ae6d-a192927d75c8-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-v57lc\" (UID: \"28ae7881-d794-4020-ae6d-a192927d75c8\") " pod="openstack/dnsmasq-dns-6578955fd5-v57lc" Jan 21 16:07:11 crc kubenswrapper[4760]: I0121 16:07:11.400771 4760 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/28ae7881-d794-4020-ae6d-a192927d75c8-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-v57lc\" (UID: \"28ae7881-d794-4020-ae6d-a192927d75c8\") " pod="openstack/dnsmasq-dns-6578955fd5-v57lc" Jan 21 16:07:11 crc kubenswrapper[4760]: I0121 16:07:11.404848 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28ae7881-d794-4020-ae6d-a192927d75c8-dns-svc\") pod \"dnsmasq-dns-6578955fd5-v57lc\" (UID: \"28ae7881-d794-4020-ae6d-a192927d75c8\") " pod="openstack/dnsmasq-dns-6578955fd5-v57lc" Jan 21 16:07:11 crc kubenswrapper[4760]: I0121 16:07:11.405103 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28ae7881-d794-4020-ae6d-a192927d75c8-config\") pod \"dnsmasq-dns-6578955fd5-v57lc\" (UID: \"28ae7881-d794-4020-ae6d-a192927d75c8\") " pod="openstack/dnsmasq-dns-6578955fd5-v57lc" Jan 21 16:07:11 crc kubenswrapper[4760]: I0121 16:07:11.405108 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/28ae7881-d794-4020-ae6d-a192927d75c8-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-v57lc\" (UID: \"28ae7881-d794-4020-ae6d-a192927d75c8\") " pod="openstack/dnsmasq-dns-6578955fd5-v57lc" Jan 21 16:07:11 crc kubenswrapper[4760]: I0121 16:07:11.445054 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkl4v\" (UniqueName: \"kubernetes.io/projected/28ae7881-d794-4020-ae6d-a192927d75c8-kube-api-access-qkl4v\") pod \"dnsmasq-dns-6578955fd5-v57lc\" (UID: \"28ae7881-d794-4020-ae6d-a192927d75c8\") " pod="openstack/dnsmasq-dns-6578955fd5-v57lc" Jan 21 16:07:11 crc kubenswrapper[4760]: I0121 16:07:11.616067 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-v57lc" Jan 21 16:07:11 crc kubenswrapper[4760]: I0121 16:07:11.633439 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8b68aa1-7489-4689-ad6b-8aa7149b9a67" path="/var/lib/kubelet/pods/a8b68aa1-7489-4689-ad6b-8aa7149b9a67/volumes" Jan 21 16:07:11 crc kubenswrapper[4760]: I0121 16:07:11.935969 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8647866847-sn996" podUID="0223a6f4-1b74-490b-913d-9421094e5f35" containerName="dnsmasq-dns" containerID="cri-o://9b987d4480d7463c9fbb398ca5e064fe46e718548bea825d0b551bd4b2c0635b" gracePeriod=10 Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.080981 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.082794 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.089786 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.089962 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.090395 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-2lr4r" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.103922 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.125826 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: 
\"73aff760-e303-42c4-b30b-cd8062dbb12f\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.126093 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68qwz\" (UniqueName: \"kubernetes.io/projected/73aff760-e303-42c4-b30b-cd8062dbb12f-kube-api-access-68qwz\") pod \"glance-default-external-api-0\" (UID: \"73aff760-e303-42c4-b30b-cd8062dbb12f\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.126201 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73aff760-e303-42c4-b30b-cd8062dbb12f-scripts\") pod \"glance-default-external-api-0\" (UID: \"73aff760-e303-42c4-b30b-cd8062dbb12f\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.126310 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/73aff760-e303-42c4-b30b-cd8062dbb12f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"73aff760-e303-42c4-b30b-cd8062dbb12f\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.126424 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73aff760-e303-42c4-b30b-cd8062dbb12f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"73aff760-e303-42c4-b30b-cd8062dbb12f\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.126802 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73aff760-e303-42c4-b30b-cd8062dbb12f-logs\") pod 
\"glance-default-external-api-0\" (UID: \"73aff760-e303-42c4-b30b-cd8062dbb12f\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.126949 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73aff760-e303-42c4-b30b-cd8062dbb12f-config-data\") pod \"glance-default-external-api-0\" (UID: \"73aff760-e303-42c4-b30b-cd8062dbb12f\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.231489 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"73aff760-e303-42c4-b30b-cd8062dbb12f\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.231993 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68qwz\" (UniqueName: \"kubernetes.io/projected/73aff760-e303-42c4-b30b-cd8062dbb12f-kube-api-access-68qwz\") pod \"glance-default-external-api-0\" (UID: \"73aff760-e303-42c4-b30b-cd8062dbb12f\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.232065 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73aff760-e303-42c4-b30b-cd8062dbb12f-scripts\") pod \"glance-default-external-api-0\" (UID: \"73aff760-e303-42c4-b30b-cd8062dbb12f\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.232091 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/73aff760-e303-42c4-b30b-cd8062dbb12f-httpd-run\") pod \"glance-default-external-api-0\" (UID: 
\"73aff760-e303-42c4-b30b-cd8062dbb12f\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.232129 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73aff760-e303-42c4-b30b-cd8062dbb12f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"73aff760-e303-42c4-b30b-cd8062dbb12f\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.232185 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73aff760-e303-42c4-b30b-cd8062dbb12f-logs\") pod \"glance-default-external-api-0\" (UID: \"73aff760-e303-42c4-b30b-cd8062dbb12f\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.232226 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73aff760-e303-42c4-b30b-cd8062dbb12f-config-data\") pod \"glance-default-external-api-0\" (UID: \"73aff760-e303-42c4-b30b-cd8062dbb12f\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.236026 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73aff760-e303-42c4-b30b-cd8062dbb12f-logs\") pod \"glance-default-external-api-0\" (UID: \"73aff760-e303-42c4-b30b-cd8062dbb12f\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.236875 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/73aff760-e303-42c4-b30b-cd8062dbb12f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"73aff760-e303-42c4-b30b-cd8062dbb12f\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:12 crc 
kubenswrapper[4760]: I0121 16:07:12.237523 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"73aff760-e303-42c4-b30b-cd8062dbb12f\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.249999 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73aff760-e303-42c4-b30b-cd8062dbb12f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"73aff760-e303-42c4-b30b-cd8062dbb12f\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.262157 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73aff760-e303-42c4-b30b-cd8062dbb12f-scripts\") pod \"glance-default-external-api-0\" (UID: \"73aff760-e303-42c4-b30b-cd8062dbb12f\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.283476 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68qwz\" (UniqueName: \"kubernetes.io/projected/73aff760-e303-42c4-b30b-cd8062dbb12f-kube-api-access-68qwz\") pod \"glance-default-external-api-0\" (UID: \"73aff760-e303-42c4-b30b-cd8062dbb12f\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.284757 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73aff760-e303-42c4-b30b-cd8062dbb12f-config-data\") pod \"glance-default-external-api-0\" (UID: \"73aff760-e303-42c4-b30b-cd8062dbb12f\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.420940 4760 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"73aff760-e303-42c4-b30b-cd8062dbb12f\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.487734 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.490674 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.496192 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.525850 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.568955 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.768244 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"db57e542-32cc-4256-a057-0b37b35cdc24\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.768783 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db57e542-32cc-4256-a057-0b37b35cdc24-scripts\") pod \"glance-default-internal-api-0\" (UID: \"db57e542-32cc-4256-a057-0b37b35cdc24\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.768826 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hgnn\" (UniqueName: \"kubernetes.io/projected/db57e542-32cc-4256-a057-0b37b35cdc24-kube-api-access-4hgnn\") pod \"glance-default-internal-api-0\" (UID: \"db57e542-32cc-4256-a057-0b37b35cdc24\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.768927 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db57e542-32cc-4256-a057-0b37b35cdc24-config-data\") pod \"glance-default-internal-api-0\" (UID: \"db57e542-32cc-4256-a057-0b37b35cdc24\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.769048 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db57e542-32cc-4256-a057-0b37b35cdc24-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: 
\"db57e542-32cc-4256-a057-0b37b35cdc24\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.769161 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/db57e542-32cc-4256-a057-0b37b35cdc24-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"db57e542-32cc-4256-a057-0b37b35cdc24\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.769247 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db57e542-32cc-4256-a057-0b37b35cdc24-logs\") pod \"glance-default-internal-api-0\" (UID: \"db57e542-32cc-4256-a057-0b37b35cdc24\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.870613 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"db57e542-32cc-4256-a057-0b37b35cdc24\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.870689 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db57e542-32cc-4256-a057-0b37b35cdc24-scripts\") pod \"glance-default-internal-api-0\" (UID: \"db57e542-32cc-4256-a057-0b37b35cdc24\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.870720 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hgnn\" (UniqueName: \"kubernetes.io/projected/db57e542-32cc-4256-a057-0b37b35cdc24-kube-api-access-4hgnn\") pod \"glance-default-internal-api-0\" (UID: \"db57e542-32cc-4256-a057-0b37b35cdc24\") " 
pod="openstack/glance-default-internal-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.870784 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db57e542-32cc-4256-a057-0b37b35cdc24-config-data\") pod \"glance-default-internal-api-0\" (UID: \"db57e542-32cc-4256-a057-0b37b35cdc24\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.870874 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db57e542-32cc-4256-a057-0b37b35cdc24-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"db57e542-32cc-4256-a057-0b37b35cdc24\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.870965 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/db57e542-32cc-4256-a057-0b37b35cdc24-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"db57e542-32cc-4256-a057-0b37b35cdc24\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.871030 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db57e542-32cc-4256-a057-0b37b35cdc24-logs\") pod \"glance-default-internal-api-0\" (UID: \"db57e542-32cc-4256-a057-0b37b35cdc24\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.871662 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db57e542-32cc-4256-a057-0b37b35cdc24-logs\") pod \"glance-default-internal-api-0\" (UID: \"db57e542-32cc-4256-a057-0b37b35cdc24\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 
16:07:12.871956 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"db57e542-32cc-4256-a057-0b37b35cdc24\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.875195 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/db57e542-32cc-4256-a057-0b37b35cdc24-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"db57e542-32cc-4256-a057-0b37b35cdc24\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.900386 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db57e542-32cc-4256-a057-0b37b35cdc24-config-data\") pod \"glance-default-internal-api-0\" (UID: \"db57e542-32cc-4256-a057-0b37b35cdc24\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.905876 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db57e542-32cc-4256-a057-0b37b35cdc24-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"db57e542-32cc-4256-a057-0b37b35cdc24\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.913985 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db57e542-32cc-4256-a057-0b37b35cdc24-scripts\") pod \"glance-default-internal-api-0\" (UID: \"db57e542-32cc-4256-a057-0b37b35cdc24\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.923967 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-4hgnn\" (UniqueName: \"kubernetes.io/projected/db57e542-32cc-4256-a057-0b37b35cdc24-kube-api-access-4hgnn\") pod \"glance-default-internal-api-0\" (UID: \"db57e542-32cc-4256-a057-0b37b35cdc24\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.923995 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-v57lc"] Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.961735 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"db57e542-32cc-4256-a057-0b37b35cdc24\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.963411 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.964990 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8647866847-sn996" Jan 21 16:07:12 crc kubenswrapper[4760]: W0121 16:07:12.968413 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcae71bb0_4c04_47db_a201_a172da79df7f.slice/crio-c187b0e1bc1251c0fc77ae55d688bcb6a9fca7de4415e1e7807cacb325d574a7 WatchSource:0}: Error finding container c187b0e1bc1251c0fc77ae55d688bcb6a9fca7de4415e1e7807cacb325d574a7: Status 404 returned error can't find the container with id c187b0e1bc1251c0fc77ae55d688bcb6a9fca7de4415e1e7807cacb325d574a7 Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.971228 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-757cdb9855-pfpj6" event={"ID":"470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed","Type":"ContainerStarted","Data":"da271b59be81fcfa5db51c9691c68ffa654249bff4940b33ed98cc8124c2ed02"} Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.978491 4760 generic.go:334] "Generic (PLEG): container finished" podID="0223a6f4-1b74-490b-913d-9421094e5f35" containerID="9b987d4480d7463c9fbb398ca5e064fe46e718548bea825d0b551bd4b2c0635b" exitCode=0 Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.978589 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8647866847-sn996" event={"ID":"0223a6f4-1b74-490b-913d-9421094e5f35","Type":"ContainerDied","Data":"9b987d4480d7463c9fbb398ca5e064fe46e718548bea825d0b551bd4b2c0635b"} Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.978635 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8647866847-sn996" event={"ID":"0223a6f4-1b74-490b-913d-9421094e5f35","Type":"ContainerDied","Data":"20e65130ae9d42662dfae80cbec2051400d4c3fed0c8cc2bf296af6c3b1272df"} Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.978655 4760 scope.go:117] "RemoveContainer" containerID="9b987d4480d7463c9fbb398ca5e064fe46e718548bea825d0b551bd4b2c0635b" Jan 21 
16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.978860 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8647866847-sn996" Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.992356 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-86579fc786-9vmn6" event={"ID":"6283023b-6e8b-4d25-b8e9-c0d91b08a913","Type":"ContainerStarted","Data":"88f1d73e26938285d10c86d7892247190a424dba2dc588df3bf5091970f24264"} Jan 21 16:07:12 crc kubenswrapper[4760]: I0121 16:07:12.993821 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-v57lc" event={"ID":"28ae7881-d794-4020-ae6d-a192927d75c8","Type":"ContainerStarted","Data":"32fcd1b60d2d86a5acc94c6a6bc2f951249985413b27086c207699de1e47a6c2"} Jan 21 16:07:13 crc kubenswrapper[4760]: I0121 16:07:13.028999 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6978f7d9ff-2dmrj"] Jan 21 16:07:13 crc kubenswrapper[4760]: I0121 16:07:13.062216 4760 scope.go:117] "RemoveContainer" containerID="68ff03dcd7cd33dee67a72eb48fc0104ce8d08489c5e991c768bed3e2c728a3b" Jan 21 16:07:13 crc kubenswrapper[4760]: I0121 16:07:13.075959 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cr5xn\" (UniqueName: \"kubernetes.io/projected/0223a6f4-1b74-490b-913d-9421094e5f35-kube-api-access-cr5xn\") pod \"0223a6f4-1b74-490b-913d-9421094e5f35\" (UID: \"0223a6f4-1b74-490b-913d-9421094e5f35\") " Jan 21 16:07:13 crc kubenswrapper[4760]: I0121 16:07:13.076015 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0223a6f4-1b74-490b-913d-9421094e5f35-dns-svc\") pod \"0223a6f4-1b74-490b-913d-9421094e5f35\" (UID: \"0223a6f4-1b74-490b-913d-9421094e5f35\") " Jan 21 16:07:13 crc kubenswrapper[4760]: I0121 16:07:13.076056 4760 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0223a6f4-1b74-490b-913d-9421094e5f35-dns-swift-storage-0\") pod \"0223a6f4-1b74-490b-913d-9421094e5f35\" (UID: \"0223a6f4-1b74-490b-913d-9421094e5f35\") " Jan 21 16:07:13 crc kubenswrapper[4760]: I0121 16:07:13.076169 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0223a6f4-1b74-490b-913d-9421094e5f35-ovsdbserver-nb\") pod \"0223a6f4-1b74-490b-913d-9421094e5f35\" (UID: \"0223a6f4-1b74-490b-913d-9421094e5f35\") " Jan 21 16:07:13 crc kubenswrapper[4760]: I0121 16:07:13.076346 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0223a6f4-1b74-490b-913d-9421094e5f35-ovsdbserver-sb\") pod \"0223a6f4-1b74-490b-913d-9421094e5f35\" (UID: \"0223a6f4-1b74-490b-913d-9421094e5f35\") " Jan 21 16:07:13 crc kubenswrapper[4760]: I0121 16:07:13.076393 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0223a6f4-1b74-490b-913d-9421094e5f35-config\") pod \"0223a6f4-1b74-490b-913d-9421094e5f35\" (UID: \"0223a6f4-1b74-490b-913d-9421094e5f35\") " Jan 21 16:07:13 crc kubenswrapper[4760]: I0121 16:07:13.119824 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0223a6f4-1b74-490b-913d-9421094e5f35-kube-api-access-cr5xn" (OuterVolumeSpecName: "kube-api-access-cr5xn") pod "0223a6f4-1b74-490b-913d-9421094e5f35" (UID: "0223a6f4-1b74-490b-913d-9421094e5f35"). InnerVolumeSpecName "kube-api-access-cr5xn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:07:13 crc kubenswrapper[4760]: I0121 16:07:13.139283 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 21 16:07:13 crc kubenswrapper[4760]: I0121 16:07:13.178430 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 16:07:13 crc kubenswrapper[4760]: I0121 16:07:13.185582 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cr5xn\" (UniqueName: \"kubernetes.io/projected/0223a6f4-1b74-490b-913d-9421094e5f35-kube-api-access-cr5xn\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:13 crc kubenswrapper[4760]: I0121 16:07:13.274420 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 16:07:13 crc kubenswrapper[4760]: W0121 16:07:13.343853 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29ee8909_527a_4a4d_a04c_9a401c551a6d.slice/crio-4301c85466bd137aebdd84f3be156e4ebde225e6244ff05bfc740b55d6bef9d0 WatchSource:0}: Error finding container 4301c85466bd137aebdd84f3be156e4ebde225e6244ff05bfc740b55d6bef9d0: Status 404 returned error can't find the container with id 4301c85466bd137aebdd84f3be156e4ebde225e6244ff05bfc740b55d6bef9d0 Jan 21 16:07:13 crc kubenswrapper[4760]: W0121 16:07:13.355567 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode76b744a_9845_4295_80c1_eb276462b45f.slice/crio-3dff80f153f5d28aeb9e2505196ffd6323fab10fa0cb62489e1b414aceebdd97 WatchSource:0}: Error finding container 3dff80f153f5d28aeb9e2505196ffd6323fab10fa0cb62489e1b414aceebdd97: Status 404 returned error can't find the container with id 3dff80f153f5d28aeb9e2505196ffd6323fab10fa0cb62489e1b414aceebdd97 Jan 21 16:07:13 crc kubenswrapper[4760]: I0121 16:07:13.444802 4760 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0223a6f4-1b74-490b-913d-9421094e5f35-config" (OuterVolumeSpecName: "config") pod "0223a6f4-1b74-490b-913d-9421094e5f35" (UID: "0223a6f4-1b74-490b-913d-9421094e5f35"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:13 crc kubenswrapper[4760]: I0121 16:07:13.461921 4760 scope.go:117] "RemoveContainer" containerID="9b987d4480d7463c9fbb398ca5e064fe46e718548bea825d0b551bd4b2c0635b" Jan 21 16:07:13 crc kubenswrapper[4760]: E0121 16:07:13.463020 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b987d4480d7463c9fbb398ca5e064fe46e718548bea825d0b551bd4b2c0635b\": container with ID starting with 9b987d4480d7463c9fbb398ca5e064fe46e718548bea825d0b551bd4b2c0635b not found: ID does not exist" containerID="9b987d4480d7463c9fbb398ca5e064fe46e718548bea825d0b551bd4b2c0635b" Jan 21 16:07:13 crc kubenswrapper[4760]: I0121 16:07:13.463066 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b987d4480d7463c9fbb398ca5e064fe46e718548bea825d0b551bd4b2c0635b"} err="failed to get container status \"9b987d4480d7463c9fbb398ca5e064fe46e718548bea825d0b551bd4b2c0635b\": rpc error: code = NotFound desc = could not find container \"9b987d4480d7463c9fbb398ca5e064fe46e718548bea825d0b551bd4b2c0635b\": container with ID starting with 9b987d4480d7463c9fbb398ca5e064fe46e718548bea825d0b551bd4b2c0635b not found: ID does not exist" Jan 21 16:07:13 crc kubenswrapper[4760]: I0121 16:07:13.463114 4760 scope.go:117] "RemoveContainer" containerID="68ff03dcd7cd33dee67a72eb48fc0104ce8d08489c5e991c768bed3e2c728a3b" Jan 21 16:07:13 crc kubenswrapper[4760]: E0121 16:07:13.464990 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"68ff03dcd7cd33dee67a72eb48fc0104ce8d08489c5e991c768bed3e2c728a3b\": container with ID starting with 68ff03dcd7cd33dee67a72eb48fc0104ce8d08489c5e991c768bed3e2c728a3b not found: ID does not exist" containerID="68ff03dcd7cd33dee67a72eb48fc0104ce8d08489c5e991c768bed3e2c728a3b" Jan 21 16:07:13 crc kubenswrapper[4760]: I0121 16:07:13.465017 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68ff03dcd7cd33dee67a72eb48fc0104ce8d08489c5e991c768bed3e2c728a3b"} err="failed to get container status \"68ff03dcd7cd33dee67a72eb48fc0104ce8d08489c5e991c768bed3e2c728a3b\": rpc error: code = NotFound desc = could not find container \"68ff03dcd7cd33dee67a72eb48fc0104ce8d08489c5e991c768bed3e2c728a3b\": container with ID starting with 68ff03dcd7cd33dee67a72eb48fc0104ce8d08489c5e991c768bed3e2c728a3b not found: ID does not exist" Jan 21 16:07:13 crc kubenswrapper[4760]: I0121 16:07:13.483554 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0223a6f4-1b74-490b-913d-9421094e5f35-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0223a6f4-1b74-490b-913d-9421094e5f35" (UID: "0223a6f4-1b74-490b-913d-9421094e5f35"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:13 crc kubenswrapper[4760]: I0121 16:07:13.506608 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0223a6f4-1b74-490b-913d-9421094e5f35-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:13 crc kubenswrapper[4760]: I0121 16:07:13.506653 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0223a6f4-1b74-490b-913d-9421094e5f35-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:13 crc kubenswrapper[4760]: I0121 16:07:13.539670 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:07:13 crc kubenswrapper[4760]: I0121 16:07:13.625743 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0223a6f4-1b74-490b-913d-9421094e5f35-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0223a6f4-1b74-490b-913d-9421094e5f35" (UID: "0223a6f4-1b74-490b-913d-9421094e5f35"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:13 crc kubenswrapper[4760]: I0121 16:07:13.781778 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0223a6f4-1b74-490b-913d-9421094e5f35-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0223a6f4-1b74-490b-913d-9421094e5f35" (UID: "0223a6f4-1b74-490b-913d-9421094e5f35"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:13 crc kubenswrapper[4760]: I0121 16:07:13.782479 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0223a6f4-1b74-490b-913d-9421094e5f35-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0223a6f4-1b74-490b-913d-9421094e5f35" (UID: "0223a6f4-1b74-490b-913d-9421094e5f35"). 
InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:13 crc kubenswrapper[4760]: I0121 16:07:13.792043 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0223a6f4-1b74-490b-913d-9421094e5f35-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:13 crc kubenswrapper[4760]: I0121 16:07:13.792796 4760 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0223a6f4-1b74-490b-913d-9421094e5f35-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:13 crc kubenswrapper[4760]: I0121 16:07:13.792836 4760 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0223a6f4-1b74-490b-913d-9421094e5f35-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:14 crc kubenswrapper[4760]: I0121 16:07:14.013519 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6978f7d9ff-2dmrj" event={"ID":"2875e741-9553-4d41-9658-0128dfe5d27e","Type":"ContainerStarted","Data":"94dab4a95065036cda3eb8b300313199e84492cd089a8a1baa84f978a496f368"} Jan 21 16:07:14 crc kubenswrapper[4760]: I0121 16:07:14.018989 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"29ee8909-527a-4a4d-a04c-9a401c551a6d","Type":"ContainerStarted","Data":"4301c85466bd137aebdd84f3be156e4ebde225e6244ff05bfc740b55d6bef9d0"} Jan 21 16:07:14 crc kubenswrapper[4760]: I0121 16:07:14.023462 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8647866847-sn996"] Jan 21 16:07:14 crc kubenswrapper[4760]: I0121 16:07:14.024963 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"73aff760-e303-42c4-b30b-cd8062dbb12f","Type":"ContainerStarted","Data":"167f5e6cbc5c5aaaee43119cbd518f2d249a5f26082afdbbd363faba3f70c837"} 
Jan 21 16:07:14 crc kubenswrapper[4760]: I0121 16:07:14.039450 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cae71bb0-4c04-47db-a201-a172da79df7f","Type":"ContainerStarted","Data":"c187b0e1bc1251c0fc77ae55d688bcb6a9fca7de4415e1e7807cacb325d574a7"} Jan 21 16:07:14 crc kubenswrapper[4760]: I0121 16:07:14.056316 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-757cdb9855-pfpj6" event={"ID":"470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed","Type":"ContainerStarted","Data":"bf8d4555023be0cd875cfb258f631e1175a1ad2fa3795dc49bf6a4c44d99547b"} Jan 21 16:07:14 crc kubenswrapper[4760]: I0121 16:07:14.064543 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8647866847-sn996"] Jan 21 16:07:14 crc kubenswrapper[4760]: I0121 16:07:14.072787 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e76b744a-9845-4295-80c1-eb276462b45f","Type":"ContainerStarted","Data":"3dff80f153f5d28aeb9e2505196ffd6323fab10fa0cb62489e1b414aceebdd97"} Jan 21 16:07:14 crc kubenswrapper[4760]: I0121 16:07:14.090180 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-757cdb9855-pfpj6" podStartSLOduration=4.436362419 podStartE2EDuration="8.090149203s" podCreationTimestamp="2026-01-21 16:07:06 +0000 UTC" firstStartedPulling="2026-01-21 16:07:08.135848004 +0000 UTC m=+1198.803617582" lastFinishedPulling="2026-01-21 16:07:11.789634788 +0000 UTC m=+1202.457404366" observedRunningTime="2026-01-21 16:07:14.085463121 +0000 UTC m=+1204.753232699" watchObservedRunningTime="2026-01-21 16:07:14.090149203 +0000 UTC m=+1204.757918781" Jan 21 16:07:14 crc kubenswrapper[4760]: I0121 16:07:14.287921 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:07:15 crc kubenswrapper[4760]: I0121 16:07:15.099842 4760 generic.go:334] "Generic (PLEG): container finished" 
podID="2875e741-9553-4d41-9658-0128dfe5d27e" containerID="cd8443123971ba0b3ce41fbd9ef0e389de9463ff1d82fc16fdf7007618663891" exitCode=0 Jan 21 16:07:15 crc kubenswrapper[4760]: I0121 16:07:15.100461 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6978f7d9ff-2dmrj" event={"ID":"2875e741-9553-4d41-9658-0128dfe5d27e","Type":"ContainerDied","Data":"cd8443123971ba0b3ce41fbd9ef0e389de9463ff1d82fc16fdf7007618663891"} Jan 21 16:07:15 crc kubenswrapper[4760]: I0121 16:07:15.134559 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-86579fc786-9vmn6" event={"ID":"6283023b-6e8b-4d25-b8e9-c0d91b08a913","Type":"ContainerStarted","Data":"2caca31ab37768e1b1335a139869a7dd8bc5973a5245009dd4db4e226e0ab773"} Jan 21 16:07:15 crc kubenswrapper[4760]: I0121 16:07:15.143975 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"db57e542-32cc-4256-a057-0b37b35cdc24","Type":"ContainerStarted","Data":"84ffbaee0356a805927dcff2ec8cddd93e978de0cb519bdee312b38c0311df82"} Jan 21 16:07:15 crc kubenswrapper[4760]: I0121 16:07:15.167972 4760 generic.go:334] "Generic (PLEG): container finished" podID="28ae7881-d794-4020-ae6d-a192927d75c8" containerID="d1c1490964aac721fec04529370c88c8f8ac1caecbb735bae40378ec6315de8a" exitCode=0 Jan 21 16:07:15 crc kubenswrapper[4760]: I0121 16:07:15.169421 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-v57lc" event={"ID":"28ae7881-d794-4020-ae6d-a192927d75c8","Type":"ContainerDied","Data":"d1c1490964aac721fec04529370c88c8f8ac1caecbb735bae40378ec6315de8a"} Jan 21 16:07:15 crc kubenswrapper[4760]: I0121 16:07:15.255393 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-86579fc786-9vmn6" podStartSLOduration=5.450881275 podStartE2EDuration="9.255369614s" podCreationTimestamp="2026-01-21 16:07:06 +0000 UTC" 
firstStartedPulling="2026-01-21 16:07:07.997607567 +0000 UTC m=+1198.665377145" lastFinishedPulling="2026-01-21 16:07:11.802095906 +0000 UTC m=+1202.469865484" observedRunningTime="2026-01-21 16:07:15.194868617 +0000 UTC m=+1205.862638195" watchObservedRunningTime="2026-01-21 16:07:15.255369614 +0000 UTC m=+1205.923139192" Jan 21 16:07:15 crc kubenswrapper[4760]: I0121 16:07:15.680552 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0223a6f4-1b74-490b-913d-9421094e5f35" path="/var/lib/kubelet/pods/0223a6f4-1b74-490b-913d-9421094e5f35/volumes" Jan 21 16:07:15 crc kubenswrapper[4760]: I0121 16:07:15.997567 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6978f7d9ff-2dmrj" Jan 21 16:07:16 crc kubenswrapper[4760]: I0121 16:07:16.123263 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2875e741-9553-4d41-9658-0128dfe5d27e-ovsdbserver-sb\") pod \"2875e741-9553-4d41-9658-0128dfe5d27e\" (UID: \"2875e741-9553-4d41-9658-0128dfe5d27e\") " Jan 21 16:07:16 crc kubenswrapper[4760]: I0121 16:07:16.123467 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxhkx\" (UniqueName: \"kubernetes.io/projected/2875e741-9553-4d41-9658-0128dfe5d27e-kube-api-access-vxhkx\") pod \"2875e741-9553-4d41-9658-0128dfe5d27e\" (UID: \"2875e741-9553-4d41-9658-0128dfe5d27e\") " Jan 21 16:07:16 crc kubenswrapper[4760]: I0121 16:07:16.123512 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2875e741-9553-4d41-9658-0128dfe5d27e-config\") pod \"2875e741-9553-4d41-9658-0128dfe5d27e\" (UID: \"2875e741-9553-4d41-9658-0128dfe5d27e\") " Jan 21 16:07:16 crc kubenswrapper[4760]: I0121 16:07:16.123615 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/2875e741-9553-4d41-9658-0128dfe5d27e-ovsdbserver-nb\") pod \"2875e741-9553-4d41-9658-0128dfe5d27e\" (UID: \"2875e741-9553-4d41-9658-0128dfe5d27e\") " Jan 21 16:07:16 crc kubenswrapper[4760]: I0121 16:07:16.123666 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2875e741-9553-4d41-9658-0128dfe5d27e-dns-swift-storage-0\") pod \"2875e741-9553-4d41-9658-0128dfe5d27e\" (UID: \"2875e741-9553-4d41-9658-0128dfe5d27e\") " Jan 21 16:07:16 crc kubenswrapper[4760]: I0121 16:07:16.123715 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2875e741-9553-4d41-9658-0128dfe5d27e-dns-svc\") pod \"2875e741-9553-4d41-9658-0128dfe5d27e\" (UID: \"2875e741-9553-4d41-9658-0128dfe5d27e\") " Jan 21 16:07:16 crc kubenswrapper[4760]: I0121 16:07:16.201707 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2875e741-9553-4d41-9658-0128dfe5d27e-kube-api-access-vxhkx" (OuterVolumeSpecName: "kube-api-access-vxhkx") pod "2875e741-9553-4d41-9658-0128dfe5d27e" (UID: "2875e741-9553-4d41-9658-0128dfe5d27e"). InnerVolumeSpecName "kube-api-access-vxhkx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:07:16 crc kubenswrapper[4760]: I0121 16:07:16.224531 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cae71bb0-4c04-47db-a201-a172da79df7f","Type":"ContainerStarted","Data":"f0d98c1476b4ecf1bf06d6ea80af30154414e1a1beeaa39901bfdfed920e1539"} Jan 21 16:07:16 crc kubenswrapper[4760]: I0121 16:07:16.225637 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxhkx\" (UniqueName: \"kubernetes.io/projected/2875e741-9553-4d41-9658-0128dfe5d27e-kube-api-access-vxhkx\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:16 crc kubenswrapper[4760]: I0121 16:07:16.252729 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6978f7d9ff-2dmrj" event={"ID":"2875e741-9553-4d41-9658-0128dfe5d27e","Type":"ContainerDied","Data":"94dab4a95065036cda3eb8b300313199e84492cd089a8a1baa84f978a496f368"} Jan 21 16:07:16 crc kubenswrapper[4760]: I0121 16:07:16.252804 4760 scope.go:117] "RemoveContainer" containerID="cd8443123971ba0b3ce41fbd9ef0e389de9463ff1d82fc16fdf7007618663891" Jan 21 16:07:16 crc kubenswrapper[4760]: I0121 16:07:16.252981 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6978f7d9ff-2dmrj" Jan 21 16:07:16 crc kubenswrapper[4760]: I0121 16:07:16.277231 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"73aff760-e303-42c4-b30b-cd8062dbb12f","Type":"ContainerStarted","Data":"4c1683276e732fc325242144cbaf63aad9bacde6e82f6d6369d36ddab35c747c"} Jan 21 16:07:16 crc kubenswrapper[4760]: I0121 16:07:16.318183 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 21 16:07:16 crc kubenswrapper[4760]: I0121 16:07:16.364892 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2875e741-9553-4d41-9658-0128dfe5d27e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2875e741-9553-4d41-9658-0128dfe5d27e" (UID: "2875e741-9553-4d41-9658-0128dfe5d27e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:16 crc kubenswrapper[4760]: I0121 16:07:16.432759 4760 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2875e741-9553-4d41-9658-0128dfe5d27e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:16 crc kubenswrapper[4760]: I0121 16:07:16.478251 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2875e741-9553-4d41-9658-0128dfe5d27e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2875e741-9553-4d41-9658-0128dfe5d27e" (UID: "2875e741-9553-4d41-9658-0128dfe5d27e"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:16 crc kubenswrapper[4760]: I0121 16:07:16.479108 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2875e741-9553-4d41-9658-0128dfe5d27e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2875e741-9553-4d41-9658-0128dfe5d27e" (UID: "2875e741-9553-4d41-9658-0128dfe5d27e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:16 crc kubenswrapper[4760]: I0121 16:07:16.536511 4760 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2875e741-9553-4d41-9658-0128dfe5d27e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:16 crc kubenswrapper[4760]: I0121 16:07:16.536559 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2875e741-9553-4d41-9658-0128dfe5d27e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:16 crc kubenswrapper[4760]: I0121 16:07:16.542435 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2875e741-9553-4d41-9658-0128dfe5d27e-config" (OuterVolumeSpecName: "config") pod "2875e741-9553-4d41-9658-0128dfe5d27e" (UID: "2875e741-9553-4d41-9658-0128dfe5d27e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:16 crc kubenswrapper[4760]: I0121 16:07:16.585426 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2875e741-9553-4d41-9658-0128dfe5d27e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2875e741-9553-4d41-9658-0128dfe5d27e" (UID: "2875e741-9553-4d41-9658-0128dfe5d27e"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:16 crc kubenswrapper[4760]: I0121 16:07:16.637868 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2875e741-9553-4d41-9658-0128dfe5d27e-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:16 crc kubenswrapper[4760]: I0121 16:07:16.637898 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2875e741-9553-4d41-9658-0128dfe5d27e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.291448 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5c89c5dbb6-sspr9"] Jan 21 16:07:17 crc kubenswrapper[4760]: E0121 16:07:17.293269 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0223a6f4-1b74-490b-913d-9421094e5f35" containerName="dnsmasq-dns" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.293293 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="0223a6f4-1b74-490b-913d-9421094e5f35" containerName="dnsmasq-dns" Jan 21 16:07:17 crc kubenswrapper[4760]: E0121 16:07:17.293345 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0223a6f4-1b74-490b-913d-9421094e5f35" containerName="init" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.293352 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="0223a6f4-1b74-490b-913d-9421094e5f35" containerName="init" Jan 21 16:07:17 crc kubenswrapper[4760]: E0121 16:07:17.293367 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2875e741-9553-4d41-9658-0128dfe5d27e" containerName="init" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.293450 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="2875e741-9553-4d41-9658-0128dfe5d27e" containerName="init" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.293729 4760 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="0223a6f4-1b74-490b-913d-9421094e5f35" containerName="dnsmasq-dns" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.293751 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="2875e741-9553-4d41-9658-0128dfe5d27e" containerName="init" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.296200 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5c89c5dbb6-sspr9" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.302782 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.302782 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.306670 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5c89c5dbb6-sspr9"] Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.334717 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"db57e542-32cc-4256-a057-0b37b35cdc24","Type":"ContainerStarted","Data":"d2359bf23bb44bc19fa6db8e7df888d2fa478aac6b689e60cf298b8a3695e1cf"} Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.358383 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/78418f27-9273-42a4-aaa2-74edfcd10ef1-config-data-custom\") pod \"barbican-api-5c89c5dbb6-sspr9\" (UID: \"78418f27-9273-42a4-aaa2-74edfcd10ef1\") " pod="openstack/barbican-api-5c89c5dbb6-sspr9" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.358483 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78418f27-9273-42a4-aaa2-74edfcd10ef1-config-data\") pod \"barbican-api-5c89c5dbb6-sspr9\" 
(UID: \"78418f27-9273-42a4-aaa2-74edfcd10ef1\") " pod="openstack/barbican-api-5c89c5dbb6-sspr9" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.358531 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lrq2\" (UniqueName: \"kubernetes.io/projected/78418f27-9273-42a4-aaa2-74edfcd10ef1-kube-api-access-7lrq2\") pod \"barbican-api-5c89c5dbb6-sspr9\" (UID: \"78418f27-9273-42a4-aaa2-74edfcd10ef1\") " pod="openstack/barbican-api-5c89c5dbb6-sspr9" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.358560 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/78418f27-9273-42a4-aaa2-74edfcd10ef1-public-tls-certs\") pod \"barbican-api-5c89c5dbb6-sspr9\" (UID: \"78418f27-9273-42a4-aaa2-74edfcd10ef1\") " pod="openstack/barbican-api-5c89c5dbb6-sspr9" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.358588 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78418f27-9273-42a4-aaa2-74edfcd10ef1-logs\") pod \"barbican-api-5c89c5dbb6-sspr9\" (UID: \"78418f27-9273-42a4-aaa2-74edfcd10ef1\") " pod="openstack/barbican-api-5c89c5dbb6-sspr9" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.358612 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/78418f27-9273-42a4-aaa2-74edfcd10ef1-internal-tls-certs\") pod \"barbican-api-5c89c5dbb6-sspr9\" (UID: \"78418f27-9273-42a4-aaa2-74edfcd10ef1\") " pod="openstack/barbican-api-5c89c5dbb6-sspr9" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.358638 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/78418f27-9273-42a4-aaa2-74edfcd10ef1-combined-ca-bundle\") pod \"barbican-api-5c89c5dbb6-sspr9\" (UID: \"78418f27-9273-42a4-aaa2-74edfcd10ef1\") " pod="openstack/barbican-api-5c89c5dbb6-sspr9" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.393705 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-v57lc" event={"ID":"28ae7881-d794-4020-ae6d-a192927d75c8","Type":"ContainerStarted","Data":"ff67fd5ca06d840d69b08678dbd65581a03876594be36121c532a55770c8469f"} Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.394303 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-v57lc" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.413788 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e76b744a-9845-4295-80c1-eb276462b45f","Type":"ContainerStarted","Data":"d2470a094e14d7da50a9f1d3b50efda0fed69eae60738b00a6c247bd23c71ac1"} Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.429018 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6578955fd5-v57lc" podStartSLOduration=6.428989705 podStartE2EDuration="6.428989705s" podCreationTimestamp="2026-01-21 16:07:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:07:17.423873722 +0000 UTC m=+1208.091643320" watchObservedRunningTime="2026-01-21 16:07:17.428989705 +0000 UTC m=+1208.096759293" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.436218 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"29ee8909-527a-4a4d-a04c-9a401c551a6d","Type":"ContainerStarted","Data":"ae2627308f1dd57a0a1c4e77c80df7f39d4dc96a93037c77d1e801ac098a23ef"} Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.465018 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/78418f27-9273-42a4-aaa2-74edfcd10ef1-config-data-custom\") pod \"barbican-api-5c89c5dbb6-sspr9\" (UID: \"78418f27-9273-42a4-aaa2-74edfcd10ef1\") " pod="openstack/barbican-api-5c89c5dbb6-sspr9" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.465131 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78418f27-9273-42a4-aaa2-74edfcd10ef1-config-data\") pod \"barbican-api-5c89c5dbb6-sspr9\" (UID: \"78418f27-9273-42a4-aaa2-74edfcd10ef1\") " pod="openstack/barbican-api-5c89c5dbb6-sspr9" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.465202 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lrq2\" (UniqueName: \"kubernetes.io/projected/78418f27-9273-42a4-aaa2-74edfcd10ef1-kube-api-access-7lrq2\") pod \"barbican-api-5c89c5dbb6-sspr9\" (UID: \"78418f27-9273-42a4-aaa2-74edfcd10ef1\") " pod="openstack/barbican-api-5c89c5dbb6-sspr9" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.465230 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/78418f27-9273-42a4-aaa2-74edfcd10ef1-public-tls-certs\") pod \"barbican-api-5c89c5dbb6-sspr9\" (UID: \"78418f27-9273-42a4-aaa2-74edfcd10ef1\") " pod="openstack/barbican-api-5c89c5dbb6-sspr9" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.465272 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78418f27-9273-42a4-aaa2-74edfcd10ef1-logs\") pod \"barbican-api-5c89c5dbb6-sspr9\" (UID: \"78418f27-9273-42a4-aaa2-74edfcd10ef1\") " pod="openstack/barbican-api-5c89c5dbb6-sspr9" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.465293 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/78418f27-9273-42a4-aaa2-74edfcd10ef1-internal-tls-certs\") pod \"barbican-api-5c89c5dbb6-sspr9\" (UID: \"78418f27-9273-42a4-aaa2-74edfcd10ef1\") " pod="openstack/barbican-api-5c89c5dbb6-sspr9" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.465379 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78418f27-9273-42a4-aaa2-74edfcd10ef1-combined-ca-bundle\") pod \"barbican-api-5c89c5dbb6-sspr9\" (UID: \"78418f27-9273-42a4-aaa2-74edfcd10ef1\") " pod="openstack/barbican-api-5c89c5dbb6-sspr9" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.478889 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78418f27-9273-42a4-aaa2-74edfcd10ef1-logs\") pod \"barbican-api-5c89c5dbb6-sspr9\" (UID: \"78418f27-9273-42a4-aaa2-74edfcd10ef1\") " pod="openstack/barbican-api-5c89c5dbb6-sspr9" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.499361 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78418f27-9273-42a4-aaa2-74edfcd10ef1-combined-ca-bundle\") pod \"barbican-api-5c89c5dbb6-sspr9\" (UID: \"78418f27-9273-42a4-aaa2-74edfcd10ef1\") " pod="openstack/barbican-api-5c89c5dbb6-sspr9" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.506980 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lrq2\" (UniqueName: \"kubernetes.io/projected/78418f27-9273-42a4-aaa2-74edfcd10ef1-kube-api-access-7lrq2\") pod \"barbican-api-5c89c5dbb6-sspr9\" (UID: \"78418f27-9273-42a4-aaa2-74edfcd10ef1\") " pod="openstack/barbican-api-5c89c5dbb6-sspr9" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.507991 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/78418f27-9273-42a4-aaa2-74edfcd10ef1-config-data\") pod \"barbican-api-5c89c5dbb6-sspr9\" (UID: \"78418f27-9273-42a4-aaa2-74edfcd10ef1\") " pod="openstack/barbican-api-5c89c5dbb6-sspr9" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.524351 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/78418f27-9273-42a4-aaa2-74edfcd10ef1-internal-tls-certs\") pod \"barbican-api-5c89c5dbb6-sspr9\" (UID: \"78418f27-9273-42a4-aaa2-74edfcd10ef1\") " pod="openstack/barbican-api-5c89c5dbb6-sspr9" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.530727 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/78418f27-9273-42a4-aaa2-74edfcd10ef1-config-data-custom\") pod \"barbican-api-5c89c5dbb6-sspr9\" (UID: \"78418f27-9273-42a4-aaa2-74edfcd10ef1\") " pod="openstack/barbican-api-5c89c5dbb6-sspr9" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.557004 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6978f7d9ff-2dmrj"] Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.574347 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/78418f27-9273-42a4-aaa2-74edfcd10ef1-public-tls-certs\") pod \"barbican-api-5c89c5dbb6-sspr9\" (UID: \"78418f27-9273-42a4-aaa2-74edfcd10ef1\") " pod="openstack/barbican-api-5c89c5dbb6-sspr9" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.601577 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6978f7d9ff-2dmrj"] Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.681685 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2875e741-9553-4d41-9658-0128dfe5d27e" path="/var/lib/kubelet/pods/2875e741-9553-4d41-9658-0128dfe5d27e/volumes" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 
16:07:17.748117 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.759087 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5c89c5dbb6-sspr9" Jan 21 16:07:17 crc kubenswrapper[4760]: I0121 16:07:17.854629 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:07:18 crc kubenswrapper[4760]: I0121 16:07:18.223778 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-789c75ff48-s7f9p" podUID="9ce8d17c-d046-45b5-9136-6faca838de63" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:07:18 crc kubenswrapper[4760]: I0121 16:07:18.223863 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-789c75ff48-s7f9p" Jan 21 16:07:18 crc kubenswrapper[4760]: I0121 16:07:18.225812 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"c3bc057180aff5b7f74696812035164d3822f5c925dea41492a6a319d6faaf1f"} pod="openstack/horizon-789c75ff48-s7f9p" containerMessage="Container horizon failed startup probe, will be restarted" Jan 21 16:07:18 crc kubenswrapper[4760]: I0121 16:07:18.225857 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-789c75ff48-s7f9p" podUID="9ce8d17c-d046-45b5-9136-6faca838de63" containerName="horizon" containerID="cri-o://c3bc057180aff5b7f74696812035164d3822f5c925dea41492a6a319d6faaf1f" gracePeriod=30 Jan 21 16:07:18 crc kubenswrapper[4760]: I0121 16:07:18.234622 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5c9896dc76-gwrzv" podUID="0e7e96ce-a64f-4a21-97e1-b2ebabc7e236" 
containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:07:18 crc kubenswrapper[4760]: I0121 16:07:18.234683 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5c9896dc76-gwrzv" Jan 21 16:07:18 crc kubenswrapper[4760]: I0121 16:07:18.236954 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"612f54d4a291cca7d313cb4a4bbb528e2bf6ea9e286a50f6261fde65a890e7b4"} pod="openstack/horizon-5c9896dc76-gwrzv" containerMessage="Container horizon failed startup probe, will be restarted" Jan 21 16:07:18 crc kubenswrapper[4760]: I0121 16:07:18.237003 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5c9896dc76-gwrzv" podUID="0e7e96ce-a64f-4a21-97e1-b2ebabc7e236" containerName="horizon" containerID="cri-o://612f54d4a291cca7d313cb4a4bbb528e2bf6ea9e286a50f6261fde65a890e7b4" gracePeriod=30 Jan 21 16:07:18 crc kubenswrapper[4760]: I0121 16:07:18.568232 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="73aff760-e303-42c4-b30b-cd8062dbb12f" containerName="glance-log" containerID="cri-o://4c1683276e732fc325242144cbaf63aad9bacde6e82f6d6369d36ddab35c747c" gracePeriod=30 Jan 21 16:07:18 crc kubenswrapper[4760]: I0121 16:07:18.568674 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"73aff760-e303-42c4-b30b-cd8062dbb12f","Type":"ContainerStarted","Data":"cc78722a1f59c5151ea09efb4b0c25ee7401f8d6d233fcc10317fa4b322394bc"} Jan 21 16:07:18 crc kubenswrapper[4760]: I0121 16:07:18.569392 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" 
podUID="73aff760-e303-42c4-b30b-cd8062dbb12f" containerName="glance-httpd" containerID="cri-o://cc78722a1f59c5151ea09efb4b0c25ee7401f8d6d233fcc10317fa4b322394bc" gracePeriod=30 Jan 21 16:07:18 crc kubenswrapper[4760]: I0121 16:07:18.644979 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.64495262 podStartE2EDuration="7.64495262s" podCreationTimestamp="2026-01-21 16:07:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:07:18.630556125 +0000 UTC m=+1209.298325713" watchObservedRunningTime="2026-01-21 16:07:18.64495262 +0000 UTC m=+1209.312722198" Jan 21 16:07:19 crc kubenswrapper[4760]: I0121 16:07:19.110115 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-5df994f884-hfwfn" podUID="4383f2f1-00d7-4c21-905a-944cd4f852fc" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.157:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:07:19 crc kubenswrapper[4760]: I0121 16:07:19.124870 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-5df994f884-hfwfn" podUID="4383f2f1-00d7-4c21-905a-944cd4f852fc" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.157:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:07:19 crc kubenswrapper[4760]: I0121 16:07:19.286652 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-64f66997d8-wj49l" Jan 21 16:07:19 crc kubenswrapper[4760]: I0121 16:07:19.426206 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5c89c5dbb6-sspr9"] Jan 21 16:07:19 crc kubenswrapper[4760]: E0121 16:07:19.493868 4760 cadvisor_stats_provider.go:516] "Partial failure 
issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod73aff760_e303_42c4_b30b_cd8062dbb12f.slice/crio-conmon-cc78722a1f59c5151ea09efb4b0c25ee7401f8d6d233fcc10317fa4b322394bc.scope\": RecentStats: unable to find data in memory cache]" Jan 21 16:07:19 crc kubenswrapper[4760]: I0121 16:07:19.597517 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5c89c5dbb6-sspr9" event={"ID":"78418f27-9273-42a4-aaa2-74edfcd10ef1","Type":"ContainerStarted","Data":"fcf6508eea74f67db0f0b6559a8c8abc5d6a6f0915c5cc81ee0c14c5f1cdf624"} Jan 21 16:07:19 crc kubenswrapper[4760]: I0121 16:07:19.623936 4760 generic.go:334] "Generic (PLEG): container finished" podID="73aff760-e303-42c4-b30b-cd8062dbb12f" containerID="cc78722a1f59c5151ea09efb4b0c25ee7401f8d6d233fcc10317fa4b322394bc" exitCode=143 Jan 21 16:07:19 crc kubenswrapper[4760]: I0121 16:07:19.624033 4760 generic.go:334] "Generic (PLEG): container finished" podID="73aff760-e303-42c4-b30b-cd8062dbb12f" containerID="4c1683276e732fc325242144cbaf63aad9bacde6e82f6d6369d36ddab35c747c" exitCode=143 Jan 21 16:07:19 crc kubenswrapper[4760]: I0121 16:07:19.655718 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"73aff760-e303-42c4-b30b-cd8062dbb12f","Type":"ContainerDied","Data":"cc78722a1f59c5151ea09efb4b0c25ee7401f8d6d233fcc10317fa4b322394bc"} Jan 21 16:07:19 crc kubenswrapper[4760]: I0121 16:07:19.655769 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"73aff760-e303-42c4-b30b-cd8062dbb12f","Type":"ContainerDied","Data":"4c1683276e732fc325242144cbaf63aad9bacde6e82f6d6369d36ddab35c747c"} Jan 21 16:07:19 crc kubenswrapper[4760]: I0121 16:07:19.661619 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"cae71bb0-4c04-47db-a201-a172da79df7f","Type":"ContainerStarted","Data":"9fd93af6e0533d1631dd037511cefd785d35dd3a23e69afcd53595504737104f"} Jan 21 16:07:19 crc kubenswrapper[4760]: I0121 16:07:19.664048 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e76b744a-9845-4295-80c1-eb276462b45f","Type":"ContainerStarted","Data":"9cc80e3ae9c6ec3d04a4999ad52486b1418f261b21c407378d5c983416a30d7e"} Jan 21 16:07:19 crc kubenswrapper[4760]: I0121 16:07:19.664226 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="e76b744a-9845-4295-80c1-eb276462b45f" containerName="cinder-api-log" containerID="cri-o://d2470a094e14d7da50a9f1d3b50efda0fed69eae60738b00a6c247bd23c71ac1" gracePeriod=30 Jan 21 16:07:19 crc kubenswrapper[4760]: I0121 16:07:19.664522 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 21 16:07:19 crc kubenswrapper[4760]: I0121 16:07:19.664557 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="e76b744a-9845-4295-80c1-eb276462b45f" containerName="cinder-api" containerID="cri-o://9cc80e3ae9c6ec3d04a4999ad52486b1418f261b21c407378d5c983416a30d7e" gracePeriod=30 Jan 21 16:07:19 crc kubenswrapper[4760]: I0121 16:07:19.913891 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=9.91386892 podStartE2EDuration="9.91386892s" podCreationTimestamp="2026-01-21 16:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:07:19.913805669 +0000 UTC m=+1210.581575247" watchObservedRunningTime="2026-01-21 16:07:19.91386892 +0000 UTC m=+1210.581638498" Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.080981 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.129285 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73aff760-e303-42c4-b30b-cd8062dbb12f-combined-ca-bundle\") pod \"73aff760-e303-42c4-b30b-cd8062dbb12f\" (UID: \"73aff760-e303-42c4-b30b-cd8062dbb12f\") " Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.129414 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73aff760-e303-42c4-b30b-cd8062dbb12f-config-data\") pod \"73aff760-e303-42c4-b30b-cd8062dbb12f\" (UID: \"73aff760-e303-42c4-b30b-cd8062dbb12f\") " Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.129468 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"73aff760-e303-42c4-b30b-cd8062dbb12f\" (UID: \"73aff760-e303-42c4-b30b-cd8062dbb12f\") " Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.129539 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68qwz\" (UniqueName: \"kubernetes.io/projected/73aff760-e303-42c4-b30b-cd8062dbb12f-kube-api-access-68qwz\") pod \"73aff760-e303-42c4-b30b-cd8062dbb12f\" (UID: \"73aff760-e303-42c4-b30b-cd8062dbb12f\") " Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.129594 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73aff760-e303-42c4-b30b-cd8062dbb12f-logs\") pod \"73aff760-e303-42c4-b30b-cd8062dbb12f\" (UID: \"73aff760-e303-42c4-b30b-cd8062dbb12f\") " Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.129648 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/73aff760-e303-42c4-b30b-cd8062dbb12f-scripts\") pod \"73aff760-e303-42c4-b30b-cd8062dbb12f\" (UID: \"73aff760-e303-42c4-b30b-cd8062dbb12f\") " Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.129728 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/73aff760-e303-42c4-b30b-cd8062dbb12f-httpd-run\") pod \"73aff760-e303-42c4-b30b-cd8062dbb12f\" (UID: \"73aff760-e303-42c4-b30b-cd8062dbb12f\") " Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.130782 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73aff760-e303-42c4-b30b-cd8062dbb12f-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "73aff760-e303-42c4-b30b-cd8062dbb12f" (UID: "73aff760-e303-42c4-b30b-cd8062dbb12f"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.131460 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73aff760-e303-42c4-b30b-cd8062dbb12f-logs" (OuterVolumeSpecName: "logs") pod "73aff760-e303-42c4-b30b-cd8062dbb12f" (UID: "73aff760-e303-42c4-b30b-cd8062dbb12f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.152571 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "73aff760-e303-42c4-b30b-cd8062dbb12f" (UID: "73aff760-e303-42c4-b30b-cd8062dbb12f"). InnerVolumeSpecName "local-storage10-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.154749 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73aff760-e303-42c4-b30b-cd8062dbb12f-kube-api-access-68qwz" (OuterVolumeSpecName: "kube-api-access-68qwz") pod "73aff760-e303-42c4-b30b-cd8062dbb12f" (UID: "73aff760-e303-42c4-b30b-cd8062dbb12f"). InnerVolumeSpecName "kube-api-access-68qwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.159646 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73aff760-e303-42c4-b30b-cd8062dbb12f-scripts" (OuterVolumeSpecName: "scripts") pod "73aff760-e303-42c4-b30b-cd8062dbb12f" (UID: "73aff760-e303-42c4-b30b-cd8062dbb12f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.223668 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73aff760-e303-42c4-b30b-cd8062dbb12f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "73aff760-e303-42c4-b30b-cd8062dbb12f" (UID: "73aff760-e303-42c4-b30b-cd8062dbb12f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.235263 4760 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.235308 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68qwz\" (UniqueName: \"kubernetes.io/projected/73aff760-e303-42c4-b30b-cd8062dbb12f-kube-api-access-68qwz\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.235336 4760 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73aff760-e303-42c4-b30b-cd8062dbb12f-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.237068 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73aff760-e303-42c4-b30b-cd8062dbb12f-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.237137 4760 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/73aff760-e303-42c4-b30b-cd8062dbb12f-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.237155 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73aff760-e303-42c4-b30b-cd8062dbb12f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.270604 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73aff760-e303-42c4-b30b-cd8062dbb12f-config-data" (OuterVolumeSpecName: "config-data") pod "73aff760-e303-42c4-b30b-cd8062dbb12f" (UID: "73aff760-e303-42c4-b30b-cd8062dbb12f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.282167 4760 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.338493 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73aff760-e303-42c4-b30b-cd8062dbb12f-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.338523 4760 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.709918 4760 generic.go:334] "Generic (PLEG): container finished" podID="e76b744a-9845-4295-80c1-eb276462b45f" containerID="d2470a094e14d7da50a9f1d3b50efda0fed69eae60738b00a6c247bd23c71ac1" exitCode=143 Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.710062 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e76b744a-9845-4295-80c1-eb276462b45f","Type":"ContainerDied","Data":"d2470a094e14d7da50a9f1d3b50efda0fed69eae60738b00a6c247bd23c71ac1"} Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.722026 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5c89c5dbb6-sspr9" event={"ID":"78418f27-9273-42a4-aaa2-74edfcd10ef1","Type":"ContainerStarted","Data":"f6a59cd4f3951207aa822c4fc0314437f246f48e8e8a94eb68375b332c1adda8"} Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.724092 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"29ee8909-527a-4a4d-a04c-9a401c551a6d","Type":"ContainerStarted","Data":"bb8253f388e54603717981cb9a370ac6603e7c88b6d0c9f290b91d12efae7c30"} Jan 21 16:07:20 crc 
kubenswrapper[4760]: I0121 16:07:20.733098 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"db57e542-32cc-4256-a057-0b37b35cdc24","Type":"ContainerStarted","Data":"f4c8098cb73e0c8d08b5338f1af14e560cbaed635d232c122e95b4d641cd920f"} Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.733496 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="db57e542-32cc-4256-a057-0b37b35cdc24" containerName="glance-log" containerID="cri-o://d2359bf23bb44bc19fa6db8e7df888d2fa478aac6b689e60cf298b8a3695e1cf" gracePeriod=30 Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.733500 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="db57e542-32cc-4256-a057-0b37b35cdc24" containerName="glance-httpd" containerID="cri-o://f4c8098cb73e0c8d08b5338f1af14e560cbaed635d232c122e95b4d641cd920f" gracePeriod=30 Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.743217 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"73aff760-e303-42c4-b30b-cd8062dbb12f","Type":"ContainerDied","Data":"167f5e6cbc5c5aaaee43119cbd518f2d249a5f26082afdbbd363faba3f70c837"} Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.743407 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.743298 4760 scope.go:117] "RemoveContainer" containerID="cc78722a1f59c5151ea09efb4b0c25ee7401f8d6d233fcc10317fa4b322394bc" Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.763493 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=8.971698894 podStartE2EDuration="10.763470942s" podCreationTimestamp="2026-01-21 16:07:10 +0000 UTC" firstStartedPulling="2026-01-21 16:07:13.39099486 +0000 UTC m=+1204.058764438" lastFinishedPulling="2026-01-21 16:07:15.182766908 +0000 UTC m=+1205.850536486" observedRunningTime="2026-01-21 16:07:20.762200472 +0000 UTC m=+1211.429970050" watchObservedRunningTime="2026-01-21 16:07:20.763470942 +0000 UTC m=+1211.431240520" Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.767100 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cae71bb0-4c04-47db-a201-a172da79df7f","Type":"ContainerStarted","Data":"deb4a03bcdb098135ada43846c6b9ac2431c7af97a67335e5c3d752605125767"} Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.793301 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=9.793272055 podStartE2EDuration="9.793272055s" podCreationTimestamp="2026-01-21 16:07:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:07:20.791722558 +0000 UTC m=+1211.459492136" watchObservedRunningTime="2026-01-21 16:07:20.793272055 +0000 UTC m=+1211.461041633" Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.863428 4760 scope.go:117] "RemoveContainer" containerID="4c1683276e732fc325242144cbaf63aad9bacde6e82f6d6369d36ddab35c747c" Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.878923 4760 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.928682 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.952521 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.952606 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.982666 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:07:20 crc kubenswrapper[4760]: E0121 16:07:20.983358 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73aff760-e303-42c4-b30b-cd8062dbb12f" containerName="glance-httpd" Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.983386 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="73aff760-e303-42c4-b30b-cd8062dbb12f" containerName="glance-httpd" Jan 21 16:07:20 crc kubenswrapper[4760]: E0121 16:07:20.983447 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73aff760-e303-42c4-b30b-cd8062dbb12f" containerName="glance-log" Jan 21 16:07:20 crc kubenswrapper[4760]: I0121 16:07:20.983458 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="73aff760-e303-42c4-b30b-cd8062dbb12f" containerName="glance-log" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.001971 4760 
memory_manager.go:354] "RemoveStaleState removing state" podUID="73aff760-e303-42c4-b30b-cd8062dbb12f" containerName="glance-log" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.002050 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="73aff760-e303-42c4-b30b-cd8062dbb12f" containerName="glance-httpd" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.003484 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.006501 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.011815 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.011996 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.190954 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wcsh\" (UniqueName: \"kubernetes.io/projected/53329917-a467-4919-b5ad-170f6fa50655-kube-api-access-7wcsh\") pod \"glance-default-external-api-0\" (UID: \"53329917-a467-4919-b5ad-170f6fa50655\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.191674 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53329917-a467-4919-b5ad-170f6fa50655-scripts\") pod \"glance-default-external-api-0\" (UID: \"53329917-a467-4919-b5ad-170f6fa50655\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.191846 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53329917-a467-4919-b5ad-170f6fa50655-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"53329917-a467-4919-b5ad-170f6fa50655\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.191913 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"53329917-a467-4919-b5ad-170f6fa50655\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.191959 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/53329917-a467-4919-b5ad-170f6fa50655-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"53329917-a467-4919-b5ad-170f6fa50655\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.192018 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53329917-a467-4919-b5ad-170f6fa50655-config-data\") pod \"glance-default-external-api-0\" (UID: \"53329917-a467-4919-b5ad-170f6fa50655\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.192072 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/53329917-a467-4919-b5ad-170f6fa50655-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"53329917-a467-4919-b5ad-170f6fa50655\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.192115 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53329917-a467-4919-b5ad-170f6fa50655-logs\") pod \"glance-default-external-api-0\" (UID: \"53329917-a467-4919-b5ad-170f6fa50655\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.295758 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53329917-a467-4919-b5ad-170f6fa50655-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"53329917-a467-4919-b5ad-170f6fa50655\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.295828 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"53329917-a467-4919-b5ad-170f6fa50655\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.295858 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/53329917-a467-4919-b5ad-170f6fa50655-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"53329917-a467-4919-b5ad-170f6fa50655\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.295918 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53329917-a467-4919-b5ad-170f6fa50655-config-data\") pod \"glance-default-external-api-0\" (UID: \"53329917-a467-4919-b5ad-170f6fa50655\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.295974 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" 
(UniqueName: \"kubernetes.io/empty-dir/53329917-a467-4919-b5ad-170f6fa50655-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"53329917-a467-4919-b5ad-170f6fa50655\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.296004 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53329917-a467-4919-b5ad-170f6fa50655-logs\") pod \"glance-default-external-api-0\" (UID: \"53329917-a467-4919-b5ad-170f6fa50655\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.296067 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wcsh\" (UniqueName: \"kubernetes.io/projected/53329917-a467-4919-b5ad-170f6fa50655-kube-api-access-7wcsh\") pod \"glance-default-external-api-0\" (UID: \"53329917-a467-4919-b5ad-170f6fa50655\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.296092 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53329917-a467-4919-b5ad-170f6fa50655-scripts\") pod \"glance-default-external-api-0\" (UID: \"53329917-a467-4919-b5ad-170f6fa50655\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.300587 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53329917-a467-4919-b5ad-170f6fa50655-logs\") pod \"glance-default-external-api-0\" (UID: \"53329917-a467-4919-b5ad-170f6fa50655\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.300875 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/53329917-a467-4919-b5ad-170f6fa50655-httpd-run\") pod 
\"glance-default-external-api-0\" (UID: \"53329917-a467-4919-b5ad-170f6fa50655\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.301156 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"53329917-a467-4919-b5ad-170f6fa50655\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.312510 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/53329917-a467-4919-b5ad-170f6fa50655-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"53329917-a467-4919-b5ad-170f6fa50655\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.315135 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53329917-a467-4919-b5ad-170f6fa50655-scripts\") pod \"glance-default-external-api-0\" (UID: \"53329917-a467-4919-b5ad-170f6fa50655\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.317963 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53329917-a467-4919-b5ad-170f6fa50655-config-data\") pod \"glance-default-external-api-0\" (UID: \"53329917-a467-4919-b5ad-170f6fa50655\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.318065 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6c6778d77f-gkzrk" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.323299 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/53329917-a467-4919-b5ad-170f6fa50655-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"53329917-a467-4919-b5ad-170f6fa50655\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.323309 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wcsh\" (UniqueName: \"kubernetes.io/projected/53329917-a467-4919-b5ad-170f6fa50655-kube-api-access-7wcsh\") pod \"glance-default-external-api-0\" (UID: \"53329917-a467-4919-b5ad-170f6fa50655\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.352196 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"53329917-a467-4919-b5ad-170f6fa50655\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.452617 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-64f66997d8-wj49l"] Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.452917 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-64f66997d8-wj49l" podUID="91fc26b9-373e-446b-8345-eae2740aac66" containerName="neutron-api" containerID="cri-o://a986ebbb2f6e292f71ba54ba9ca8d611dbceabf2b8f7e56bf383fb4b2edc0c57" gracePeriod=30 Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.453504 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-64f66997d8-wj49l" podUID="91fc26b9-373e-446b-8345-eae2740aac66" containerName="neutron-httpd" containerID="cri-o://32cbd81036a54e4a85f4ae140c458d0e2c2f50eba42c122a4f19ffe03fee4df9" gracePeriod=30 Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.623484 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/dnsmasq-dns-6578955fd5-v57lc" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.653713 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73aff760-e303-42c4-b30b-cd8062dbb12f" path="/var/lib/kubelet/pods/73aff760-e303-42c4-b30b-cd8062dbb12f/volumes" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.654630 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5df994f884-hfwfn" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.662400 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.770987 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79cd4f6685-krlfc"] Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.771396 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79cd4f6685-krlfc" podUID="13f413eb-0ded-492d-83fa-5d255f83b266" containerName="dnsmasq-dns" containerID="cri-o://7738982acb0c3ed317e4b86401b4020cc1a7e9ffdb58ad3f4a19d26d9d619659" gracePeriod=10 Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.913488 4760 generic.go:334] "Generic (PLEG): container finished" podID="91fc26b9-373e-446b-8345-eae2740aac66" containerID="32cbd81036a54e4a85f4ae140c458d0e2c2f50eba42c122a4f19ffe03fee4df9" exitCode=0 Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.913612 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64f66997d8-wj49l" event={"ID":"91fc26b9-373e-446b-8345-eae2740aac66","Type":"ContainerDied","Data":"32cbd81036a54e4a85f4ae140c458d0e2c2f50eba42c122a4f19ffe03fee4df9"} Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.977195 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5c89c5dbb6-sspr9" 
event={"ID":"78418f27-9273-42a4-aaa2-74edfcd10ef1","Type":"ContainerStarted","Data":"86cdd043d8ca6e86b18c42b7a29247cb973bcb42d4e6998b94fe8412c4463c04"} Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.977516 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5c89c5dbb6-sspr9" Jan 21 16:07:21 crc kubenswrapper[4760]: I0121 16:07:21.977568 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5c89c5dbb6-sspr9" Jan 21 16:07:22 crc kubenswrapper[4760]: I0121 16:07:22.012061 4760 generic.go:334] "Generic (PLEG): container finished" podID="db57e542-32cc-4256-a057-0b37b35cdc24" containerID="f4c8098cb73e0c8d08b5338f1af14e560cbaed635d232c122e95b4d641cd920f" exitCode=0 Jan 21 16:07:22 crc kubenswrapper[4760]: I0121 16:07:22.012684 4760 generic.go:334] "Generic (PLEG): container finished" podID="db57e542-32cc-4256-a057-0b37b35cdc24" containerID="d2359bf23bb44bc19fa6db8e7df888d2fa478aac6b689e60cf298b8a3695e1cf" exitCode=143 Jan 21 16:07:22 crc kubenswrapper[4760]: I0121 16:07:22.012810 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"db57e542-32cc-4256-a057-0b37b35cdc24","Type":"ContainerDied","Data":"f4c8098cb73e0c8d08b5338f1af14e560cbaed635d232c122e95b4d641cd920f"} Jan 21 16:07:22 crc kubenswrapper[4760]: I0121 16:07:22.012858 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"db57e542-32cc-4256-a057-0b37b35cdc24","Type":"ContainerDied","Data":"d2359bf23bb44bc19fa6db8e7df888d2fa478aac6b689e60cf298b8a3695e1cf"} Jan 21 16:07:22 crc kubenswrapper[4760]: I0121 16:07:22.047199 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5c89c5dbb6-sspr9" podStartSLOduration=5.047166727 podStartE2EDuration="5.047166727s" podCreationTimestamp="2026-01-21 16:07:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:07:22.04686034 +0000 UTC m=+1212.714629938" watchObservedRunningTime="2026-01-21 16:07:22.047166727 +0000 UTC m=+1212.714936305" Jan 21 16:07:22 crc kubenswrapper[4760]: I0121 16:07:22.238402 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 16:07:22 crc kubenswrapper[4760]: I0121 16:07:22.353102 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db57e542-32cc-4256-a057-0b37b35cdc24-config-data\") pod \"db57e542-32cc-4256-a057-0b37b35cdc24\" (UID: \"db57e542-32cc-4256-a057-0b37b35cdc24\") " Jan 21 16:07:22 crc kubenswrapper[4760]: I0121 16:07:22.353195 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"db57e542-32cc-4256-a057-0b37b35cdc24\" (UID: \"db57e542-32cc-4256-a057-0b37b35cdc24\") " Jan 21 16:07:22 crc kubenswrapper[4760]: I0121 16:07:22.353220 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db57e542-32cc-4256-a057-0b37b35cdc24-scripts\") pod \"db57e542-32cc-4256-a057-0b37b35cdc24\" (UID: \"db57e542-32cc-4256-a057-0b37b35cdc24\") " Jan 21 16:07:22 crc kubenswrapper[4760]: I0121 16:07:22.353252 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/db57e542-32cc-4256-a057-0b37b35cdc24-httpd-run\") pod \"db57e542-32cc-4256-a057-0b37b35cdc24\" (UID: \"db57e542-32cc-4256-a057-0b37b35cdc24\") " Jan 21 16:07:22 crc kubenswrapper[4760]: I0121 16:07:22.353275 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hgnn\" (UniqueName: 
\"kubernetes.io/projected/db57e542-32cc-4256-a057-0b37b35cdc24-kube-api-access-4hgnn\") pod \"db57e542-32cc-4256-a057-0b37b35cdc24\" (UID: \"db57e542-32cc-4256-a057-0b37b35cdc24\") " Jan 21 16:07:22 crc kubenswrapper[4760]: I0121 16:07:22.353316 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db57e542-32cc-4256-a057-0b37b35cdc24-logs\") pod \"db57e542-32cc-4256-a057-0b37b35cdc24\" (UID: \"db57e542-32cc-4256-a057-0b37b35cdc24\") " Jan 21 16:07:22 crc kubenswrapper[4760]: I0121 16:07:22.353357 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db57e542-32cc-4256-a057-0b37b35cdc24-combined-ca-bundle\") pod \"db57e542-32cc-4256-a057-0b37b35cdc24\" (UID: \"db57e542-32cc-4256-a057-0b37b35cdc24\") " Jan 21 16:07:22 crc kubenswrapper[4760]: I0121 16:07:22.359272 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db57e542-32cc-4256-a057-0b37b35cdc24-logs" (OuterVolumeSpecName: "logs") pod "db57e542-32cc-4256-a057-0b37b35cdc24" (UID: "db57e542-32cc-4256-a057-0b37b35cdc24"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:07:22 crc kubenswrapper[4760]: I0121 16:07:22.361462 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db57e542-32cc-4256-a057-0b37b35cdc24-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "db57e542-32cc-4256-a057-0b37b35cdc24" (UID: "db57e542-32cc-4256-a057-0b37b35cdc24"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:07:22 crc kubenswrapper[4760]: I0121 16:07:22.375044 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db57e542-32cc-4256-a057-0b37b35cdc24-kube-api-access-4hgnn" (OuterVolumeSpecName: "kube-api-access-4hgnn") pod "db57e542-32cc-4256-a057-0b37b35cdc24" (UID: "db57e542-32cc-4256-a057-0b37b35cdc24"). InnerVolumeSpecName "kube-api-access-4hgnn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:07:22 crc kubenswrapper[4760]: I0121 16:07:22.379366 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "db57e542-32cc-4256-a057-0b37b35cdc24" (UID: "db57e542-32cc-4256-a057-0b37b35cdc24"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 16:07:22 crc kubenswrapper[4760]: I0121 16:07:22.382539 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db57e542-32cc-4256-a057-0b37b35cdc24-scripts" (OuterVolumeSpecName: "scripts") pod "db57e542-32cc-4256-a057-0b37b35cdc24" (UID: "db57e542-32cc-4256-a057-0b37b35cdc24"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:22 crc kubenswrapper[4760]: I0121 16:07:22.461587 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5df994f884-hfwfn" Jan 21 16:07:22 crc kubenswrapper[4760]: I0121 16:07:22.464603 4760 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/db57e542-32cc-4256-a057-0b37b35cdc24-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:22 crc kubenswrapper[4760]: I0121 16:07:22.464656 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hgnn\" (UniqueName: \"kubernetes.io/projected/db57e542-32cc-4256-a057-0b37b35cdc24-kube-api-access-4hgnn\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:22 crc kubenswrapper[4760]: I0121 16:07:22.464674 4760 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db57e542-32cc-4256-a057-0b37b35cdc24-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:22 crc kubenswrapper[4760]: I0121 16:07:22.464711 4760 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 21 16:07:22 crc kubenswrapper[4760]: I0121 16:07:22.464723 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db57e542-32cc-4256-a057-0b37b35cdc24-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:22 crc kubenswrapper[4760]: I0121 16:07:22.510172 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db57e542-32cc-4256-a057-0b37b35cdc24-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db57e542-32cc-4256-a057-0b37b35cdc24" (UID: "db57e542-32cc-4256-a057-0b37b35cdc24"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:22 crc kubenswrapper[4760]: I0121 16:07:22.530060 4760 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 21 16:07:22 crc kubenswrapper[4760]: I0121 16:07:22.568155 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db57e542-32cc-4256-a057-0b37b35cdc24-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:22 crc kubenswrapper[4760]: I0121 16:07:22.568720 4760 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:22 crc kubenswrapper[4760]: I0121 16:07:22.667945 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db57e542-32cc-4256-a057-0b37b35cdc24-config-data" (OuterVolumeSpecName: "config-data") pod "db57e542-32cc-4256-a057-0b37b35cdc24" (UID: "db57e542-32cc-4256-a057-0b37b35cdc24"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:22 crc kubenswrapper[4760]: I0121 16:07:22.685531 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db57e542-32cc-4256-a057-0b37b35cdc24-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:22 crc kubenswrapper[4760]: I0121 16:07:22.977505 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:07:23 crc kubenswrapper[4760]: W0121 16:07:23.109253 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod53329917_a467_4919_b5ad_170f6fa50655.slice/crio-0c1f5aaaeb7e600775b8e20b4db3b293f673d5b7306c3b455790f8f07b2cbe9c WatchSource:0}: Error finding container 0c1f5aaaeb7e600775b8e20b4db3b293f673d5b7306c3b455790f8f07b2cbe9c: Status 404 returned error can't find the container with id 0c1f5aaaeb7e600775b8e20b4db3b293f673d5b7306c3b455790f8f07b2cbe9c Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.142833 4760 generic.go:334] "Generic (PLEG): container finished" podID="13f413eb-0ded-492d-83fa-5d255f83b266" containerID="7738982acb0c3ed317e4b86401b4020cc1a7e9ffdb58ad3f4a19d26d9d619659" exitCode=0 Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.143255 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79cd4f6685-krlfc" event={"ID":"13f413eb-0ded-492d-83fa-5d255f83b266","Type":"ContainerDied","Data":"7738982acb0c3ed317e4b86401b4020cc1a7e9ffdb58ad3f4a19d26d9d619659"} Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.154659 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79cd4f6685-krlfc" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.184900 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.185720 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"db57e542-32cc-4256-a057-0b37b35cdc24","Type":"ContainerDied","Data":"84ffbaee0356a805927dcff2ec8cddd93e978de0cb519bdee312b38c0311df82"} Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.185806 4760 scope.go:117] "RemoveContainer" containerID="f4c8098cb73e0c8d08b5338f1af14e560cbaed635d232c122e95b4d641cd920f" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.262248 4760 generic.go:334] "Generic (PLEG): container finished" podID="9ce8d17c-d046-45b5-9136-6faca838de63" containerID="c3bc057180aff5b7f74696812035164d3822f5c925dea41492a6a319d6faaf1f" exitCode=0 Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.262422 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-789c75ff48-s7f9p" event={"ID":"9ce8d17c-d046-45b5-9136-6faca838de63","Type":"ContainerDied","Data":"c3bc057180aff5b7f74696812035164d3822f5c925dea41492a6a319d6faaf1f"} Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.307308 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnlkn\" (UniqueName: \"kubernetes.io/projected/13f413eb-0ded-492d-83fa-5d255f83b266-kube-api-access-fnlkn\") pod \"13f413eb-0ded-492d-83fa-5d255f83b266\" (UID: \"13f413eb-0ded-492d-83fa-5d255f83b266\") " Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.307431 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13f413eb-0ded-492d-83fa-5d255f83b266-config\") pod \"13f413eb-0ded-492d-83fa-5d255f83b266\" (UID: \"13f413eb-0ded-492d-83fa-5d255f83b266\") " Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.307526 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/13f413eb-0ded-492d-83fa-5d255f83b266-ovsdbserver-nb\") pod \"13f413eb-0ded-492d-83fa-5d255f83b266\" (UID: \"13f413eb-0ded-492d-83fa-5d255f83b266\") " Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.307569 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/13f413eb-0ded-492d-83fa-5d255f83b266-dns-swift-storage-0\") pod \"13f413eb-0ded-492d-83fa-5d255f83b266\" (UID: \"13f413eb-0ded-492d-83fa-5d255f83b266\") " Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.307750 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13f413eb-0ded-492d-83fa-5d255f83b266-dns-svc\") pod \"13f413eb-0ded-492d-83fa-5d255f83b266\" (UID: \"13f413eb-0ded-492d-83fa-5d255f83b266\") " Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.307918 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/13f413eb-0ded-492d-83fa-5d255f83b266-ovsdbserver-sb\") pod \"13f413eb-0ded-492d-83fa-5d255f83b266\" (UID: \"13f413eb-0ded-492d-83fa-5d255f83b266\") " Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.309241 4760 scope.go:117] "RemoveContainer" containerID="d2359bf23bb44bc19fa6db8e7df888d2fa478aac6b689e60cf298b8a3695e1cf" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.350226 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cae71bb0-4c04-47db-a201-a172da79df7f","Type":"ContainerStarted","Data":"ca33b1a56688fae210f2bb0cdd0b0e70f6be2997add14258647b51bc6c275be1"} Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.351155 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.410866 4760 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.483767 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.504633 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13f413eb-0ded-492d-83fa-5d255f83b266-kube-api-access-fnlkn" (OuterVolumeSpecName: "kube-api-access-fnlkn") pod "13f413eb-0ded-492d-83fa-5d255f83b266" (UID: "13f413eb-0ded-492d-83fa-5d255f83b266"). InnerVolumeSpecName "kube-api-access-fnlkn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.516572 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:07:23 crc kubenswrapper[4760]: E0121 16:07:23.517205 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db57e542-32cc-4256-a057-0b37b35cdc24" containerName="glance-log" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.517219 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="db57e542-32cc-4256-a057-0b37b35cdc24" containerName="glance-log" Jan 21 16:07:23 crc kubenswrapper[4760]: E0121 16:07:23.517249 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13f413eb-0ded-492d-83fa-5d255f83b266" containerName="init" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.517255 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="13f413eb-0ded-492d-83fa-5d255f83b266" containerName="init" Jan 21 16:07:23 crc kubenswrapper[4760]: E0121 16:07:23.517279 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13f413eb-0ded-492d-83fa-5d255f83b266" containerName="dnsmasq-dns" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.517287 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="13f413eb-0ded-492d-83fa-5d255f83b266" 
containerName="dnsmasq-dns" Jan 21 16:07:23 crc kubenswrapper[4760]: E0121 16:07:23.517310 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db57e542-32cc-4256-a057-0b37b35cdc24" containerName="glance-httpd" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.517316 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="db57e542-32cc-4256-a057-0b37b35cdc24" containerName="glance-httpd" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.517552 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="db57e542-32cc-4256-a057-0b37b35cdc24" containerName="glance-log" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.517583 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="13f413eb-0ded-492d-83fa-5d255f83b266" containerName="dnsmasq-dns" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.517598 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="db57e542-32cc-4256-a057-0b37b35cdc24" containerName="glance-httpd" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.518845 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.525989 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.526244 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.564376 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnlkn\" (UniqueName: \"kubernetes.io/projected/13f413eb-0ded-492d-83fa-5d255f83b266-kube-api-access-fnlkn\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.582516 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.606016 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=5.732144478 podStartE2EDuration="14.605984232s" podCreationTimestamp="2026-01-21 16:07:09 +0000 UTC" firstStartedPulling="2026-01-21 16:07:13.008093311 +0000 UTC m=+1203.675862889" lastFinishedPulling="2026-01-21 16:07:21.881933065 +0000 UTC m=+1212.549702643" observedRunningTime="2026-01-21 16:07:23.42110164 +0000 UTC m=+1214.088871238" watchObservedRunningTime="2026-01-21 16:07:23.605984232 +0000 UTC m=+1214.273753810" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.667275 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8f0945b1-00a1-4723-8047-b44cee375d10-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8f0945b1-00a1-4723-8047-b44cee375d10\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.667374 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f0945b1-00a1-4723-8047-b44cee375d10-logs\") pod \"glance-default-internal-api-0\" (UID: \"8f0945b1-00a1-4723-8047-b44cee375d10\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.667476 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f0945b1-00a1-4723-8047-b44cee375d10-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8f0945b1-00a1-4723-8047-b44cee375d10\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.667500 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"8f0945b1-00a1-4723-8047-b44cee375d10\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.667524 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f0945b1-00a1-4723-8047-b44cee375d10-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8f0945b1-00a1-4723-8047-b44cee375d10\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.667597 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8hx4\" (UniqueName: \"kubernetes.io/projected/8f0945b1-00a1-4723-8047-b44cee375d10-kube-api-access-j8hx4\") pod \"glance-default-internal-api-0\" (UID: \"8f0945b1-00a1-4723-8047-b44cee375d10\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.667642 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f0945b1-00a1-4723-8047-b44cee375d10-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8f0945b1-00a1-4723-8047-b44cee375d10\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.667669 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f0945b1-00a1-4723-8047-b44cee375d10-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8f0945b1-00a1-4723-8047-b44cee375d10\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.675951 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db57e542-32cc-4256-a057-0b37b35cdc24" path="/var/lib/kubelet/pods/db57e542-32cc-4256-a057-0b37b35cdc24/volumes" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.773460 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f0945b1-00a1-4723-8047-b44cee375d10-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8f0945b1-00a1-4723-8047-b44cee375d10\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.774042 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"8f0945b1-00a1-4723-8047-b44cee375d10\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.774119 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f0945b1-00a1-4723-8047-b44cee375d10-scripts\") pod 
\"glance-default-internal-api-0\" (UID: \"8f0945b1-00a1-4723-8047-b44cee375d10\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.774621 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8hx4\" (UniqueName: \"kubernetes.io/projected/8f0945b1-00a1-4723-8047-b44cee375d10-kube-api-access-j8hx4\") pod \"glance-default-internal-api-0\" (UID: \"8f0945b1-00a1-4723-8047-b44cee375d10\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.774846 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f0945b1-00a1-4723-8047-b44cee375d10-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8f0945b1-00a1-4723-8047-b44cee375d10\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.774911 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f0945b1-00a1-4723-8047-b44cee375d10-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8f0945b1-00a1-4723-8047-b44cee375d10\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.775096 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8f0945b1-00a1-4723-8047-b44cee375d10-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8f0945b1-00a1-4723-8047-b44cee375d10\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.775268 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f0945b1-00a1-4723-8047-b44cee375d10-logs\") pod \"glance-default-internal-api-0\" (UID: 
\"8f0945b1-00a1-4723-8047-b44cee375d10\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.784781 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8f0945b1-00a1-4723-8047-b44cee375d10-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8f0945b1-00a1-4723-8047-b44cee375d10\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.785279 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f0945b1-00a1-4723-8047-b44cee375d10-logs\") pod \"glance-default-internal-api-0\" (UID: \"8f0945b1-00a1-4723-8047-b44cee375d10\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.807512 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f0945b1-00a1-4723-8047-b44cee375d10-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8f0945b1-00a1-4723-8047-b44cee375d10\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.807658 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"8f0945b1-00a1-4723-8047-b44cee375d10\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.820544 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13f413eb-0ded-492d-83fa-5d255f83b266-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "13f413eb-0ded-492d-83fa-5d255f83b266" (UID: "13f413eb-0ded-492d-83fa-5d255f83b266"). 
InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.835454 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f0945b1-00a1-4723-8047-b44cee375d10-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8f0945b1-00a1-4723-8047-b44cee375d10\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.849146 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f0945b1-00a1-4723-8047-b44cee375d10-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8f0945b1-00a1-4723-8047-b44cee375d10\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.855030 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8hx4\" (UniqueName: \"kubernetes.io/projected/8f0945b1-00a1-4723-8047-b44cee375d10-kube-api-access-j8hx4\") pod \"glance-default-internal-api-0\" (UID: \"8f0945b1-00a1-4723-8047-b44cee375d10\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.870468 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13f413eb-0ded-492d-83fa-5d255f83b266-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "13f413eb-0ded-492d-83fa-5d255f83b266" (UID: "13f413eb-0ded-492d-83fa-5d255f83b266"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.870664 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f0945b1-00a1-4723-8047-b44cee375d10-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8f0945b1-00a1-4723-8047-b44cee375d10\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.877068 4760 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13f413eb-0ded-492d-83fa-5d255f83b266-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.877106 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/13f413eb-0ded-492d-83fa-5d255f83b266-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:23 crc kubenswrapper[4760]: I0121 16:07:23.979763 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"8f0945b1-00a1-4723-8047-b44cee375d10\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:24 crc kubenswrapper[4760]: I0121 16:07:24.006256 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13f413eb-0ded-492d-83fa-5d255f83b266-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "13f413eb-0ded-492d-83fa-5d255f83b266" (UID: "13f413eb-0ded-492d-83fa-5d255f83b266"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:24 crc kubenswrapper[4760]: I0121 16:07:24.040437 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13f413eb-0ded-492d-83fa-5d255f83b266-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "13f413eb-0ded-492d-83fa-5d255f83b266" (UID: "13f413eb-0ded-492d-83fa-5d255f83b266"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:24 crc kubenswrapper[4760]: I0121 16:07:24.085017 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/13f413eb-0ded-492d-83fa-5d255f83b266-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:24 crc kubenswrapper[4760]: I0121 16:07:24.085058 4760 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/13f413eb-0ded-492d-83fa-5d255f83b266-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:24 crc kubenswrapper[4760]: I0121 16:07:24.093157 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13f413eb-0ded-492d-83fa-5d255f83b266-config" (OuterVolumeSpecName: "config") pod "13f413eb-0ded-492d-83fa-5d255f83b266" (UID: "13f413eb-0ded-492d-83fa-5d255f83b266"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:24 crc kubenswrapper[4760]: I0121 16:07:24.186740 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13f413eb-0ded-492d-83fa-5d255f83b266-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:24 crc kubenswrapper[4760]: I0121 16:07:24.220844 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 16:07:24 crc kubenswrapper[4760]: I0121 16:07:24.248949 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-65c954fbbd-tb9kj" Jan 21 16:07:24 crc kubenswrapper[4760]: I0121 16:07:24.369129 4760 generic.go:334] "Generic (PLEG): container finished" podID="91fc26b9-373e-446b-8345-eae2740aac66" containerID="a986ebbb2f6e292f71ba54ba9ca8d611dbceabf2b8f7e56bf383fb4b2edc0c57" exitCode=0 Jan 21 16:07:24 crc kubenswrapper[4760]: I0121 16:07:24.369279 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64f66997d8-wj49l" event={"ID":"91fc26b9-373e-446b-8345-eae2740aac66","Type":"ContainerDied","Data":"a986ebbb2f6e292f71ba54ba9ca8d611dbceabf2b8f7e56bf383fb4b2edc0c57"} Jan 21 16:07:24 crc kubenswrapper[4760]: I0121 16:07:24.379892 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79cd4f6685-krlfc" event={"ID":"13f413eb-0ded-492d-83fa-5d255f83b266","Type":"ContainerDied","Data":"8919eb03095eb42378f58031dd0adc0256195d0b5fc9458c192795f5f1457bd7"} Jan 21 16:07:24 crc kubenswrapper[4760]: I0121 16:07:24.379958 4760 scope.go:117] "RemoveContainer" containerID="7738982acb0c3ed317e4b86401b4020cc1a7e9ffdb58ad3f4a19d26d9d619659" Jan 21 16:07:24 crc kubenswrapper[4760]: I0121 16:07:24.380003 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79cd4f6685-krlfc" Jan 21 16:07:24 crc kubenswrapper[4760]: I0121 16:07:24.415594 4760 generic.go:334] "Generic (PLEG): container finished" podID="0e7e96ce-a64f-4a21-97e1-b2ebabc7e236" containerID="612f54d4a291cca7d313cb4a4bbb528e2bf6ea9e286a50f6261fde65a890e7b4" exitCode=0 Jan 21 16:07:24 crc kubenswrapper[4760]: I0121 16:07:24.415706 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c9896dc76-gwrzv" event={"ID":"0e7e96ce-a64f-4a21-97e1-b2ebabc7e236","Type":"ContainerDied","Data":"612f54d4a291cca7d313cb4a4bbb528e2bf6ea9e286a50f6261fde65a890e7b4"} Jan 21 16:07:24 crc kubenswrapper[4760]: I0121 16:07:24.415739 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c9896dc76-gwrzv" event={"ID":"0e7e96ce-a64f-4a21-97e1-b2ebabc7e236","Type":"ContainerStarted","Data":"f2ec1df66de3eceac5c6625abdf5ed733c41f0d7c9b51a846c85bbfff9dd22f4"} Jan 21 16:07:24 crc kubenswrapper[4760]: I0121 16:07:24.425411 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-789c75ff48-s7f9p" event={"ID":"9ce8d17c-d046-45b5-9136-6faca838de63","Type":"ContainerStarted","Data":"b0561fc99b64223a07d0ada5779e7047e4dd9e196ab449c4b0befd20ca184b74"} Jan 21 16:07:24 crc kubenswrapper[4760]: I0121 16:07:24.446174 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"53329917-a467-4919-b5ad-170f6fa50655","Type":"ContainerStarted","Data":"0c1f5aaaeb7e600775b8e20b4db3b293f673d5b7306c3b455790f8f07b2cbe9c"} Jan 21 16:07:24 crc kubenswrapper[4760]: I0121 16:07:24.451841 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79cd4f6685-krlfc"] Jan 21 16:07:24 crc kubenswrapper[4760]: I0121 16:07:24.460583 4760 scope.go:117] "RemoveContainer" containerID="a098324835da34928446d54bd96c4c7824059772f43d6b1feb5b03ad7acd1d1f" Jan 21 16:07:24 crc kubenswrapper[4760]: I0121 16:07:24.478703 4760 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79cd4f6685-krlfc"] Jan 21 16:07:24 crc kubenswrapper[4760]: I0121 16:07:24.945115 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-64f66997d8-wj49l" Jan 21 16:07:25 crc kubenswrapper[4760]: I0121 16:07:25.116129 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7blz8\" (UniqueName: \"kubernetes.io/projected/91fc26b9-373e-446b-8345-eae2740aac66-kube-api-access-7blz8\") pod \"91fc26b9-373e-446b-8345-eae2740aac66\" (UID: \"91fc26b9-373e-446b-8345-eae2740aac66\") " Jan 21 16:07:25 crc kubenswrapper[4760]: I0121 16:07:25.116254 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/91fc26b9-373e-446b-8345-eae2740aac66-httpd-config\") pod \"91fc26b9-373e-446b-8345-eae2740aac66\" (UID: \"91fc26b9-373e-446b-8345-eae2740aac66\") " Jan 21 16:07:25 crc kubenswrapper[4760]: I0121 16:07:25.116296 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91fc26b9-373e-446b-8345-eae2740aac66-combined-ca-bundle\") pod \"91fc26b9-373e-446b-8345-eae2740aac66\" (UID: \"91fc26b9-373e-446b-8345-eae2740aac66\") " Jan 21 16:07:25 crc kubenswrapper[4760]: I0121 16:07:25.116405 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/91fc26b9-373e-446b-8345-eae2740aac66-ovndb-tls-certs\") pod \"91fc26b9-373e-446b-8345-eae2740aac66\" (UID: \"91fc26b9-373e-446b-8345-eae2740aac66\") " Jan 21 16:07:25 crc kubenswrapper[4760]: I0121 16:07:25.116603 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/91fc26b9-373e-446b-8345-eae2740aac66-config\") pod \"91fc26b9-373e-446b-8345-eae2740aac66\" (UID: 
\"91fc26b9-373e-446b-8345-eae2740aac66\") " Jan 21 16:07:25 crc kubenswrapper[4760]: I0121 16:07:25.123557 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91fc26b9-373e-446b-8345-eae2740aac66-kube-api-access-7blz8" (OuterVolumeSpecName: "kube-api-access-7blz8") pod "91fc26b9-373e-446b-8345-eae2740aac66" (UID: "91fc26b9-373e-446b-8345-eae2740aac66"). InnerVolumeSpecName "kube-api-access-7blz8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:07:25 crc kubenswrapper[4760]: I0121 16:07:25.133214 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91fc26b9-373e-446b-8345-eae2740aac66-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "91fc26b9-373e-446b-8345-eae2740aac66" (UID: "91fc26b9-373e-446b-8345-eae2740aac66"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:25 crc kubenswrapper[4760]: I0121 16:07:25.246055 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-65c954fbbd-tb9kj" Jan 21 16:07:25 crc kubenswrapper[4760]: I0121 16:07:25.247740 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7blz8\" (UniqueName: \"kubernetes.io/projected/91fc26b9-373e-446b-8345-eae2740aac66-kube-api-access-7blz8\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:25 crc kubenswrapper[4760]: I0121 16:07:25.247770 4760 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/91fc26b9-373e-446b-8345-eae2740aac66-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:25 crc kubenswrapper[4760]: I0121 16:07:25.264832 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:07:25 crc kubenswrapper[4760]: I0121 16:07:25.275529 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/91fc26b9-373e-446b-8345-eae2740aac66-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "91fc26b9-373e-446b-8345-eae2740aac66" (UID: "91fc26b9-373e-446b-8345-eae2740aac66"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:25 crc kubenswrapper[4760]: I0121 16:07:25.313642 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91fc26b9-373e-446b-8345-eae2740aac66-config" (OuterVolumeSpecName: "config") pod "91fc26b9-373e-446b-8345-eae2740aac66" (UID: "91fc26b9-373e-446b-8345-eae2740aac66"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:25 crc kubenswrapper[4760]: I0121 16:07:25.349575 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91fc26b9-373e-446b-8345-eae2740aac66-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "91fc26b9-373e-446b-8345-eae2740aac66" (UID: "91fc26b9-373e-446b-8345-eae2740aac66"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:25 crc kubenswrapper[4760]: I0121 16:07:25.367599 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91fc26b9-373e-446b-8345-eae2740aac66-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:25 crc kubenswrapper[4760]: I0121 16:07:25.367926 4760 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/91fc26b9-373e-446b-8345-eae2740aac66-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:25 crc kubenswrapper[4760]: I0121 16:07:25.368020 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/91fc26b9-373e-446b-8345-eae2740aac66-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:25 crc kubenswrapper[4760]: I0121 16:07:25.536544 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8f0945b1-00a1-4723-8047-b44cee375d10","Type":"ContainerStarted","Data":"7a1e65b81fdc2bf2cd5f8d28a919b3bf01a637dbd30db0eacf7ef6d62e351489"} Jan 21 16:07:25 crc kubenswrapper[4760]: I0121 16:07:25.564265 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"53329917-a467-4919-b5ad-170f6fa50655","Type":"ContainerStarted","Data":"fc68b09c45055ed6d0b269a44950b81485cd71d02e0eb716a5258b78cd1762cd"} Jan 21 16:07:25 crc kubenswrapper[4760]: I0121 16:07:25.584508 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64f66997d8-wj49l" event={"ID":"91fc26b9-373e-446b-8345-eae2740aac66","Type":"ContainerDied","Data":"4fc62016048339323c931fffa3603e72f7a62c9ebd94588ca399a9bb3da45b78"} Jan 21 16:07:25 crc kubenswrapper[4760]: I0121 16:07:25.584604 4760 scope.go:117] "RemoveContainer" containerID="32cbd81036a54e4a85f4ae140c458d0e2c2f50eba42c122a4f19ffe03fee4df9" Jan 21 16:07:25 crc 
kubenswrapper[4760]: I0121 16:07:25.584861 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-64f66997d8-wj49l" Jan 21 16:07:25 crc kubenswrapper[4760]: I0121 16:07:25.640688 4760 scope.go:117] "RemoveContainer" containerID="a986ebbb2f6e292f71ba54ba9ca8d611dbceabf2b8f7e56bf383fb4b2edc0c57" Jan 21 16:07:25 crc kubenswrapper[4760]: I0121 16:07:25.660940 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13f413eb-0ded-492d-83fa-5d255f83b266" path="/var/lib/kubelet/pods/13f413eb-0ded-492d-83fa-5d255f83b266/volumes" Jan 21 16:07:25 crc kubenswrapper[4760]: I0121 16:07:25.703680 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-64f66997d8-wj49l"] Jan 21 16:07:25 crc kubenswrapper[4760]: I0121 16:07:25.705676 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 21 16:07:25 crc kubenswrapper[4760]: I0121 16:07:25.714634 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-64f66997d8-wj49l"] Jan 21 16:07:26 crc kubenswrapper[4760]: I0121 16:07:26.032222 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 21 16:07:26 crc kubenswrapper[4760]: I0121 16:07:26.132197 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-5b497869f9-hs8kf" Jan 21 16:07:26 crc kubenswrapper[4760]: I0121 16:07:26.587937 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8f0945b1-00a1-4723-8047-b44cee375d10","Type":"ContainerStarted","Data":"42db8ff90470066bc138cdc1856890c087031809113b3c8bd808bb47f070b4c1"} Jan 21 16:07:26 crc kubenswrapper[4760]: I0121 16:07:26.590695 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"53329917-a467-4919-b5ad-170f6fa50655","Type":"ContainerStarted","Data":"050d2576ad25bdb1d46b6e2ab6c5a3cab9f4800dbe0a57b22066038844f5e73f"} Jan 21 16:07:26 crc kubenswrapper[4760]: I0121 16:07:26.651463 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.651432746 podStartE2EDuration="6.651432746s" podCreationTimestamp="2026-01-21 16:07:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:07:26.62111265 +0000 UTC m=+1217.288882228" watchObservedRunningTime="2026-01-21 16:07:26.651432746 +0000 UTC m=+1217.319202314" Jan 21 16:07:26 crc kubenswrapper[4760]: I0121 16:07:26.698760 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 16:07:27 crc kubenswrapper[4760]: I0121 16:07:27.623061 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="29ee8909-527a-4a4d-a04c-9a401c551a6d" containerName="cinder-scheduler" containerID="cri-o://ae2627308f1dd57a0a1c4e77c80df7f39d4dc96a93037c77d1e801ac098a23ef" gracePeriod=30 Jan 21 16:07:27 crc kubenswrapper[4760]: I0121 16:07:27.623151 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="29ee8909-527a-4a4d-a04c-9a401c551a6d" containerName="probe" containerID="cri-o://bb8253f388e54603717981cb9a370ac6603e7c88b6d0c9f290b91d12efae7c30" gracePeriod=30 Jan 21 16:07:27 crc kubenswrapper[4760]: I0121 16:07:27.638336 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91fc26b9-373e-446b-8345-eae2740aac66" path="/var/lib/kubelet/pods/91fc26b9-373e-446b-8345-eae2740aac66/volumes" Jan 21 16:07:27 crc kubenswrapper[4760]: I0121 16:07:27.639364 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"8f0945b1-00a1-4723-8047-b44cee375d10","Type":"ContainerStarted","Data":"ddb3c1768bc7195398095eaeb0ab7d4403d50e3ebdea9c8b9ec55dcb7836fac2"} Jan 21 16:07:27 crc kubenswrapper[4760]: I0121 16:07:27.660246 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.660215245 podStartE2EDuration="4.660215245s" podCreationTimestamp="2026-01-21 16:07:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:07:27.655447111 +0000 UTC m=+1218.323216689" watchObservedRunningTime="2026-01-21 16:07:27.660215245 +0000 UTC m=+1218.327984823" Jan 21 16:07:28 crc kubenswrapper[4760]: I0121 16:07:28.759982 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 21 16:07:28 crc kubenswrapper[4760]: E0121 16:07:28.760371 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91fc26b9-373e-446b-8345-eae2740aac66" containerName="neutron-httpd" Jan 21 16:07:28 crc kubenswrapper[4760]: I0121 16:07:28.760384 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="91fc26b9-373e-446b-8345-eae2740aac66" containerName="neutron-httpd" Jan 21 16:07:28 crc kubenswrapper[4760]: E0121 16:07:28.760418 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91fc26b9-373e-446b-8345-eae2740aac66" containerName="neutron-api" Jan 21 16:07:28 crc kubenswrapper[4760]: I0121 16:07:28.760427 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="91fc26b9-373e-446b-8345-eae2740aac66" containerName="neutron-api" Jan 21 16:07:28 crc kubenswrapper[4760]: I0121 16:07:28.760623 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="91fc26b9-373e-446b-8345-eae2740aac66" containerName="neutron-api" Jan 21 16:07:28 crc kubenswrapper[4760]: I0121 16:07:28.760644 4760 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="91fc26b9-373e-446b-8345-eae2740aac66" containerName="neutron-httpd" Jan 21 16:07:28 crc kubenswrapper[4760]: I0121 16:07:28.761226 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 21 16:07:28 crc kubenswrapper[4760]: I0121 16:07:28.777691 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 21 16:07:28 crc kubenswrapper[4760]: I0121 16:07:28.777691 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 21 16:07:28 crc kubenswrapper[4760]: I0121 16:07:28.778142 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-mqxdc" Jan 21 16:07:28 crc kubenswrapper[4760]: I0121 16:07:28.787434 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 21 16:07:28 crc kubenswrapper[4760]: I0121 16:07:28.874570 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqssg\" (UniqueName: \"kubernetes.io/projected/7bab96b5-22e6-465e-997f-451c6f98f712-kube-api-access-bqssg\") pod \"openstackclient\" (UID: \"7bab96b5-22e6-465e-997f-451c6f98f712\") " pod="openstack/openstackclient" Jan 21 16:07:28 crc kubenswrapper[4760]: I0121 16:07:28.875159 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7bab96b5-22e6-465e-997f-451c6f98f712-openstack-config-secret\") pod \"openstackclient\" (UID: \"7bab96b5-22e6-465e-997f-451c6f98f712\") " pod="openstack/openstackclient" Jan 21 16:07:28 crc kubenswrapper[4760]: I0121 16:07:28.875296 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bab96b5-22e6-465e-997f-451c6f98f712-combined-ca-bundle\") pod 
\"openstackclient\" (UID: \"7bab96b5-22e6-465e-997f-451c6f98f712\") " pod="openstack/openstackclient" Jan 21 16:07:28 crc kubenswrapper[4760]: I0121 16:07:28.875419 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7bab96b5-22e6-465e-997f-451c6f98f712-openstack-config\") pod \"openstackclient\" (UID: \"7bab96b5-22e6-465e-997f-451c6f98f712\") " pod="openstack/openstackclient" Jan 21 16:07:28 crc kubenswrapper[4760]: I0121 16:07:28.976752 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqssg\" (UniqueName: \"kubernetes.io/projected/7bab96b5-22e6-465e-997f-451c6f98f712-kube-api-access-bqssg\") pod \"openstackclient\" (UID: \"7bab96b5-22e6-465e-997f-451c6f98f712\") " pod="openstack/openstackclient" Jan 21 16:07:28 crc kubenswrapper[4760]: I0121 16:07:28.976862 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7bab96b5-22e6-465e-997f-451c6f98f712-openstack-config-secret\") pod \"openstackclient\" (UID: \"7bab96b5-22e6-465e-997f-451c6f98f712\") " pod="openstack/openstackclient" Jan 21 16:07:28 crc kubenswrapper[4760]: I0121 16:07:28.976931 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bab96b5-22e6-465e-997f-451c6f98f712-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7bab96b5-22e6-465e-997f-451c6f98f712\") " pod="openstack/openstackclient" Jan 21 16:07:28 crc kubenswrapper[4760]: I0121 16:07:28.976983 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7bab96b5-22e6-465e-997f-451c6f98f712-openstack-config\") pod \"openstackclient\" (UID: \"7bab96b5-22e6-465e-997f-451c6f98f712\") " pod="openstack/openstackclient" Jan 21 16:07:28 crc 
kubenswrapper[4760]: I0121 16:07:28.977975 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7bab96b5-22e6-465e-997f-451c6f98f712-openstack-config\") pod \"openstackclient\" (UID: \"7bab96b5-22e6-465e-997f-451c6f98f712\") " pod="openstack/openstackclient" Jan 21 16:07:28 crc kubenswrapper[4760]: E0121 16:07:28.983712 4760 projected.go:194] Error preparing data for projected volume kube-api-access-bqssg for pod openstack/openstackclient: failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: User "system:node:crc" cannot create resource "serviceaccounts/token" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Jan 21 16:07:28 crc kubenswrapper[4760]: E0121 16:07:28.983808 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7bab96b5-22e6-465e-997f-451c6f98f712-kube-api-access-bqssg podName:7bab96b5-22e6-465e-997f-451c6f98f712 nodeName:}" failed. No retries permitted until 2026-01-21 16:07:29.483781993 +0000 UTC m=+1220.151551571 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-bqssg" (UniqueName: "kubernetes.io/projected/7bab96b5-22e6-465e-997f-451c6f98f712-kube-api-access-bqssg") pod "openstackclient" (UID: "7bab96b5-22e6-465e-997f-451c6f98f712") : failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: User "system:node:crc" cannot create resource "serviceaccounts/token" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Jan 21 16:07:28 crc kubenswrapper[4760]: I0121 16:07:28.984288 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bab96b5-22e6-465e-997f-451c6f98f712-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7bab96b5-22e6-465e-997f-451c6f98f712\") " pod="openstack/openstackclient" Jan 21 16:07:28 crc kubenswrapper[4760]: I0121 16:07:28.986517 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7bab96b5-22e6-465e-997f-451c6f98f712-openstack-config-secret\") pod \"openstackclient\" (UID: \"7bab96b5-22e6-465e-997f-451c6f98f712\") " pod="openstack/openstackclient" Jan 21 16:07:28 crc kubenswrapper[4760]: I0121 16:07:28.982305 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 21 16:07:28 crc kubenswrapper[4760]: E0121 16:07:28.988865 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-bqssg], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/openstackclient" podUID="7bab96b5-22e6-465e-997f-451c6f98f712" Jan 21 16:07:28 crc kubenswrapper[4760]: I0121 16:07:28.992464 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 21 16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.079922 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 21 
16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.081211 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 21 16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.091175 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 21 16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.181452 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e6f14c6-f759-439a-9ea1-63a88e650f89-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8e6f14c6-f759-439a-9ea1-63a88e650f89\") " pod="openstack/openstackclient" Jan 21 16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.181645 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8e6f14c6-f759-439a-9ea1-63a88e650f89-openstack-config-secret\") pod \"openstackclient\" (UID: \"8e6f14c6-f759-439a-9ea1-63a88e650f89\") " pod="openstack/openstackclient" Jan 21 16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.181725 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjsds\" (UniqueName: \"kubernetes.io/projected/8e6f14c6-f759-439a-9ea1-63a88e650f89-kube-api-access-bjsds\") pod \"openstackclient\" (UID: \"8e6f14c6-f759-439a-9ea1-63a88e650f89\") " pod="openstack/openstackclient" Jan 21 16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.181747 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8e6f14c6-f759-439a-9ea1-63a88e650f89-openstack-config\") pod \"openstackclient\" (UID: \"8e6f14c6-f759-439a-9ea1-63a88e650f89\") " pod="openstack/openstackclient" Jan 21 16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.284213 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8e6f14c6-f759-439a-9ea1-63a88e650f89-openstack-config-secret\") pod \"openstackclient\" (UID: \"8e6f14c6-f759-439a-9ea1-63a88e650f89\") " pod="openstack/openstackclient" Jan 21 16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.284371 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjsds\" (UniqueName: \"kubernetes.io/projected/8e6f14c6-f759-439a-9ea1-63a88e650f89-kube-api-access-bjsds\") pod \"openstackclient\" (UID: \"8e6f14c6-f759-439a-9ea1-63a88e650f89\") " pod="openstack/openstackclient" Jan 21 16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.284398 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8e6f14c6-f759-439a-9ea1-63a88e650f89-openstack-config\") pod \"openstackclient\" (UID: \"8e6f14c6-f759-439a-9ea1-63a88e650f89\") " pod="openstack/openstackclient" Jan 21 16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.284481 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e6f14c6-f759-439a-9ea1-63a88e650f89-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8e6f14c6-f759-439a-9ea1-63a88e650f89\") " pod="openstack/openstackclient" Jan 21 16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.286210 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8e6f14c6-f759-439a-9ea1-63a88e650f89-openstack-config\") pod \"openstackclient\" (UID: \"8e6f14c6-f759-439a-9ea1-63a88e650f89\") " pod="openstack/openstackclient" Jan 21 16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.290875 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/8e6f14c6-f759-439a-9ea1-63a88e650f89-openstack-config-secret\") pod \"openstackclient\" (UID: \"8e6f14c6-f759-439a-9ea1-63a88e650f89\") " pod="openstack/openstackclient" Jan 21 16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.294312 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e6f14c6-f759-439a-9ea1-63a88e650f89-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8e6f14c6-f759-439a-9ea1-63a88e650f89\") " pod="openstack/openstackclient" Jan 21 16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.314050 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjsds\" (UniqueName: \"kubernetes.io/projected/8e6f14c6-f759-439a-9ea1-63a88e650f89-kube-api-access-bjsds\") pod \"openstackclient\" (UID: \"8e6f14c6-f759-439a-9ea1-63a88e650f89\") " pod="openstack/openstackclient" Jan 21 16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.434648 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 21 16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.491693 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqssg\" (UniqueName: \"kubernetes.io/projected/7bab96b5-22e6-465e-997f-451c6f98f712-kube-api-access-bqssg\") pod \"openstackclient\" (UID: \"7bab96b5-22e6-465e-997f-451c6f98f712\") " pod="openstack/openstackclient" Jan 21 16:07:29 crc kubenswrapper[4760]: E0121 16:07:29.494604 4760 projected.go:194] Error preparing data for projected volume kube-api-access-bqssg for pod openstack/openstackclient: failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (7bab96b5-22e6-465e-997f-451c6f98f712) does not match the UID in record. 
The object might have been deleted and then recreated Jan 21 16:07:29 crc kubenswrapper[4760]: E0121 16:07:29.494683 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7bab96b5-22e6-465e-997f-451c6f98f712-kube-api-access-bqssg podName:7bab96b5-22e6-465e-997f-451c6f98f712 nodeName:}" failed. No retries permitted until 2026-01-21 16:07:30.494662853 +0000 UTC m=+1221.162432431 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-bqssg" (UniqueName: "kubernetes.io/projected/7bab96b5-22e6-465e-997f-451c6f98f712-kube-api-access-bqssg") pod "openstackclient" (UID: "7bab96b5-22e6-465e-997f-451c6f98f712") : failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (7bab96b5-22e6-465e-997f-451c6f98f712) does not match the UID in record. The object might have been deleted and then recreated Jan 21 16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.643247 4760 generic.go:334] "Generic (PLEG): container finished" podID="29ee8909-527a-4a4d-a04c-9a401c551a6d" containerID="bb8253f388e54603717981cb9a370ac6603e7c88b6d0c9f290b91d12efae7c30" exitCode=0 Jan 21 16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.643589 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 21 16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.643593 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"29ee8909-527a-4a4d-a04c-9a401c551a6d","Type":"ContainerDied","Data":"bb8253f388e54603717981cb9a370ac6603e7c88b6d0c9f290b91d12efae7c30"} Jan 21 16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.681578 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 21 16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.686696 4760 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="7bab96b5-22e6-465e-997f-451c6f98f712" podUID="8e6f14c6-f759-439a-9ea1-63a88e650f89" Jan 21 16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.801347 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7bab96b5-22e6-465e-997f-451c6f98f712-openstack-config-secret\") pod \"7bab96b5-22e6-465e-997f-451c6f98f712\" (UID: \"7bab96b5-22e6-465e-997f-451c6f98f712\") " Jan 21 16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.801688 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bab96b5-22e6-465e-997f-451c6f98f712-combined-ca-bundle\") pod \"7bab96b5-22e6-465e-997f-451c6f98f712\" (UID: \"7bab96b5-22e6-465e-997f-451c6f98f712\") " Jan 21 16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.801722 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7bab96b5-22e6-465e-997f-451c6f98f712-openstack-config\") pod \"7bab96b5-22e6-465e-997f-451c6f98f712\" (UID: \"7bab96b5-22e6-465e-997f-451c6f98f712\") " Jan 21 16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.802297 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqssg\" (UniqueName: \"kubernetes.io/projected/7bab96b5-22e6-465e-997f-451c6f98f712-kube-api-access-bqssg\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.803516 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bab96b5-22e6-465e-997f-451c6f98f712-openstack-config" (OuterVolumeSpecName: "openstack-config") pod 
"7bab96b5-22e6-465e-997f-451c6f98f712" (UID: "7bab96b5-22e6-465e-997f-451c6f98f712"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.819917 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bab96b5-22e6-465e-997f-451c6f98f712-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "7bab96b5-22e6-465e-997f-451c6f98f712" (UID: "7bab96b5-22e6-465e-997f-451c6f98f712"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.832490 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bab96b5-22e6-465e-997f-451c6f98f712-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7bab96b5-22e6-465e-997f-451c6f98f712" (UID: "7bab96b5-22e6-465e-997f-451c6f98f712"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.905603 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bab96b5-22e6-465e-997f-451c6f98f712-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.905635 4760 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7bab96b5-22e6-465e-997f-451c6f98f712-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.905646 4760 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7bab96b5-22e6-465e-997f-451c6f98f712-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.970436 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 21 16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.993370 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 21 16:07:29 crc kubenswrapper[4760]: I0121 16:07:29.998712 4760 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 16:07:30 crc kubenswrapper[4760]: I0121 16:07:30.680896 4760 generic.go:334] "Generic (PLEG): container finished" podID="29ee8909-527a-4a4d-a04c-9a401c551a6d" containerID="ae2627308f1dd57a0a1c4e77c80df7f39d4dc96a93037c77d1e801ac098a23ef" exitCode=0 Jan 21 16:07:30 crc kubenswrapper[4760]: I0121 16:07:30.681261 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"29ee8909-527a-4a4d-a04c-9a401c551a6d","Type":"ContainerDied","Data":"ae2627308f1dd57a0a1c4e77c80df7f39d4dc96a93037c77d1e801ac098a23ef"} Jan 21 16:07:30 crc kubenswrapper[4760]: I0121 16:07:30.685266 4760 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 21 16:07:30 crc kubenswrapper[4760]: I0121 16:07:30.687403 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"8e6f14c6-f759-439a-9ea1-63a88e650f89","Type":"ContainerStarted","Data":"c544e1ee866f96acd5f187d2265fcc39dcc42696ec26ba88e6fbb91df7fe1bf5"} Jan 21 16:07:30 crc kubenswrapper[4760]: I0121 16:07:30.703739 4760 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="7bab96b5-22e6-465e-997f-451c6f98f712" podUID="8e6f14c6-f759-439a-9ea1-63a88e650f89" Jan 21 16:07:30 crc kubenswrapper[4760]: I0121 16:07:30.976127 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.138724 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29ee8909-527a-4a4d-a04c-9a401c551a6d-combined-ca-bundle\") pod \"29ee8909-527a-4a4d-a04c-9a401c551a6d\" (UID: \"29ee8909-527a-4a4d-a04c-9a401c551a6d\") " Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.138799 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29ee8909-527a-4a4d-a04c-9a401c551a6d-scripts\") pod \"29ee8909-527a-4a4d-a04c-9a401c551a6d\" (UID: \"29ee8909-527a-4a4d-a04c-9a401c551a6d\") " Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.138930 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29ee8909-527a-4a4d-a04c-9a401c551a6d-config-data\") pod \"29ee8909-527a-4a4d-a04c-9a401c551a6d\" (UID: \"29ee8909-527a-4a4d-a04c-9a401c551a6d\") " Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.138958 4760 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/29ee8909-527a-4a4d-a04c-9a401c551a6d-etc-machine-id\") pod \"29ee8909-527a-4a4d-a04c-9a401c551a6d\" (UID: \"29ee8909-527a-4a4d-a04c-9a401c551a6d\") " Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.139176 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jt7nq\" (UniqueName: \"kubernetes.io/projected/29ee8909-527a-4a4d-a04c-9a401c551a6d-kube-api-access-jt7nq\") pod \"29ee8909-527a-4a4d-a04c-9a401c551a6d\" (UID: \"29ee8909-527a-4a4d-a04c-9a401c551a6d\") " Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.139353 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29ee8909-527a-4a4d-a04c-9a401c551a6d-config-data-custom\") pod \"29ee8909-527a-4a4d-a04c-9a401c551a6d\" (UID: \"29ee8909-527a-4a4d-a04c-9a401c551a6d\") " Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.140741 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29ee8909-527a-4a4d-a04c-9a401c551a6d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "29ee8909-527a-4a4d-a04c-9a401c551a6d" (UID: "29ee8909-527a-4a4d-a04c-9a401c551a6d"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.163984 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29ee8909-527a-4a4d-a04c-9a401c551a6d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "29ee8909-527a-4a4d-a04c-9a401c551a6d" (UID: "29ee8909-527a-4a4d-a04c-9a401c551a6d"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.164161 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29ee8909-527a-4a4d-a04c-9a401c551a6d-scripts" (OuterVolumeSpecName: "scripts") pod "29ee8909-527a-4a4d-a04c-9a401c551a6d" (UID: "29ee8909-527a-4a4d-a04c-9a401c551a6d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.165599 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29ee8909-527a-4a4d-a04c-9a401c551a6d-kube-api-access-jt7nq" (OuterVolumeSpecName: "kube-api-access-jt7nq") pod "29ee8909-527a-4a4d-a04c-9a401c551a6d" (UID: "29ee8909-527a-4a4d-a04c-9a401c551a6d"). InnerVolumeSpecName "kube-api-access-jt7nq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.219672 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5c89c5dbb6-sspr9" Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.257573 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jt7nq\" (UniqueName: \"kubernetes.io/projected/29ee8909-527a-4a4d-a04c-9a401c551a6d-kube-api-access-jt7nq\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.257621 4760 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29ee8909-527a-4a4d-a04c-9a401c551a6d-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.257635 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29ee8909-527a-4a4d-a04c-9a401c551a6d-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.257646 4760 
reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/29ee8909-527a-4a4d-a04c-9a401c551a6d-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.283521 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29ee8909-527a-4a4d-a04c-9a401c551a6d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "29ee8909-527a-4a4d-a04c-9a401c551a6d" (UID: "29ee8909-527a-4a4d-a04c-9a401c551a6d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.361718 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29ee8909-527a-4a4d-a04c-9a401c551a6d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.388702 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29ee8909-527a-4a4d-a04c-9a401c551a6d-config-data" (OuterVolumeSpecName: "config-data") pod "29ee8909-527a-4a4d-a04c-9a401c551a6d" (UID: "29ee8909-527a-4a4d-a04c-9a401c551a6d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.473630 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29ee8909-527a-4a4d-a04c-9a401c551a6d-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.653854 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bab96b5-22e6-465e-997f-451c6f98f712" path="/var/lib/kubelet/pods/7bab96b5-22e6-465e-997f-451c6f98f712/volumes"
Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.671999 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.672061 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.757042 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.761022 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.761029 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"29ee8909-527a-4a4d-a04c-9a401c551a6d","Type":"ContainerDied","Data":"4301c85466bd137aebdd84f3be156e4ebde225e6244ff05bfc740b55d6bef9d0"}
Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.761076 4760 scope.go:117] "RemoveContainer" containerID="bb8253f388e54603717981cb9a370ac6603e7c88b6d0c9f290b91d12efae7c30"
Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.761835 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.835566 4760 scope.go:117] "RemoveContainer" containerID="ae2627308f1dd57a0a1c4e77c80df7f39d4dc96a93037c77d1e801ac098a23ef"
Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.886400 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.895561 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.914287 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 21 16:07:31 crc kubenswrapper[4760]: E0121 16:07:31.929774 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29ee8909-527a-4a4d-a04c-9a401c551a6d" containerName="cinder-scheduler"
Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.929810 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="29ee8909-527a-4a4d-a04c-9a401c551a6d" containerName="cinder-scheduler"
Jan 21 16:07:31 crc kubenswrapper[4760]: E0121 16:07:31.929822 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29ee8909-527a-4a4d-a04c-9a401c551a6d" containerName="probe"
Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.929830 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="29ee8909-527a-4a4d-a04c-9a401c551a6d" containerName="probe"
Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.929998 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="29ee8909-527a-4a4d-a04c-9a401c551a6d" containerName="cinder-scheduler"
Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.930020 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="29ee8909-527a-4a4d-a04c-9a401c551a6d" containerName="probe"
Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.931012 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.942755 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.943413 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 21 16:07:31 crc kubenswrapper[4760]: I0121 16:07:31.987427 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 21 16:07:32 crc kubenswrapper[4760]: I0121 16:07:32.018012 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98fdd45e-ce0f-464e-9ac9-a61c03e0eea5-config-data\") pod \"cinder-scheduler-0\" (UID: \"98fdd45e-ce0f-464e-9ac9-a61c03e0eea5\") " pod="openstack/cinder-scheduler-0"
Jan 21 16:07:32 crc kubenswrapper[4760]: I0121 16:07:32.018302 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25lf4\" (UniqueName: \"kubernetes.io/projected/98fdd45e-ce0f-464e-9ac9-a61c03e0eea5-kube-api-access-25lf4\") pod \"cinder-scheduler-0\" (UID: \"98fdd45e-ce0f-464e-9ac9-a61c03e0eea5\") " pod="openstack/cinder-scheduler-0"
Jan 21 16:07:32 crc kubenswrapper[4760]: I0121 16:07:32.018414 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/98fdd45e-ce0f-464e-9ac9-a61c03e0eea5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"98fdd45e-ce0f-464e-9ac9-a61c03e0eea5\") " pod="openstack/cinder-scheduler-0"
Jan 21 16:07:32 crc kubenswrapper[4760]: I0121 16:07:32.018515 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98fdd45e-ce0f-464e-9ac9-a61c03e0eea5-scripts\") pod \"cinder-scheduler-0\" (UID: \"98fdd45e-ce0f-464e-9ac9-a61c03e0eea5\") " pod="openstack/cinder-scheduler-0"
Jan 21 16:07:32 crc kubenswrapper[4760]: I0121 16:07:32.018589 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/98fdd45e-ce0f-464e-9ac9-a61c03e0eea5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"98fdd45e-ce0f-464e-9ac9-a61c03e0eea5\") " pod="openstack/cinder-scheduler-0"
Jan 21 16:07:32 crc kubenswrapper[4760]: I0121 16:07:32.018725 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98fdd45e-ce0f-464e-9ac9-a61c03e0eea5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"98fdd45e-ce0f-464e-9ac9-a61c03e0eea5\") " pod="openstack/cinder-scheduler-0"
Jan 21 16:07:32 crc kubenswrapper[4760]: I0121 16:07:32.120654 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98fdd45e-ce0f-464e-9ac9-a61c03e0eea5-scripts\") pod \"cinder-scheduler-0\" (UID: \"98fdd45e-ce0f-464e-9ac9-a61c03e0eea5\") " pod="openstack/cinder-scheduler-0"
Jan 21 16:07:32 crc kubenswrapper[4760]: I0121 16:07:32.121046 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/98fdd45e-ce0f-464e-9ac9-a61c03e0eea5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"98fdd45e-ce0f-464e-9ac9-a61c03e0eea5\") " pod="openstack/cinder-scheduler-0"
Jan 21 16:07:32 crc kubenswrapper[4760]: I0121 16:07:32.121129 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/98fdd45e-ce0f-464e-9ac9-a61c03e0eea5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"98fdd45e-ce0f-464e-9ac9-a61c03e0eea5\") " pod="openstack/cinder-scheduler-0"
Jan 21 16:07:32 crc kubenswrapper[4760]: I0121 16:07:32.121193 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98fdd45e-ce0f-464e-9ac9-a61c03e0eea5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"98fdd45e-ce0f-464e-9ac9-a61c03e0eea5\") " pod="openstack/cinder-scheduler-0"
Jan 21 16:07:32 crc kubenswrapper[4760]: I0121 16:07:32.121291 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98fdd45e-ce0f-464e-9ac9-a61c03e0eea5-config-data\") pod \"cinder-scheduler-0\" (UID: \"98fdd45e-ce0f-464e-9ac9-a61c03e0eea5\") " pod="openstack/cinder-scheduler-0"
Jan 21 16:07:32 crc kubenswrapper[4760]: I0121 16:07:32.121366 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25lf4\" (UniqueName: \"kubernetes.io/projected/98fdd45e-ce0f-464e-9ac9-a61c03e0eea5-kube-api-access-25lf4\") pod \"cinder-scheduler-0\" (UID: \"98fdd45e-ce0f-464e-9ac9-a61c03e0eea5\") " pod="openstack/cinder-scheduler-0"
Jan 21 16:07:32 crc kubenswrapper[4760]: I0121 16:07:32.121399 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/98fdd45e-ce0f-464e-9ac9-a61c03e0eea5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"98fdd45e-ce0f-464e-9ac9-a61c03e0eea5\") " pod="openstack/cinder-scheduler-0"
Jan 21 16:07:32 crc kubenswrapper[4760]: I0121 16:07:32.129986 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98fdd45e-ce0f-464e-9ac9-a61c03e0eea5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"98fdd45e-ce0f-464e-9ac9-a61c03e0eea5\") " pod="openstack/cinder-scheduler-0"
Jan 21 16:07:32 crc kubenswrapper[4760]: I0121 16:07:32.130461 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98fdd45e-ce0f-464e-9ac9-a61c03e0eea5-scripts\") pod \"cinder-scheduler-0\" (UID: \"98fdd45e-ce0f-464e-9ac9-a61c03e0eea5\") " pod="openstack/cinder-scheduler-0"
Jan 21 16:07:32 crc kubenswrapper[4760]: I0121 16:07:32.130589 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98fdd45e-ce0f-464e-9ac9-a61c03e0eea5-config-data\") pod \"cinder-scheduler-0\" (UID: \"98fdd45e-ce0f-464e-9ac9-a61c03e0eea5\") " pod="openstack/cinder-scheduler-0"
Jan 21 16:07:32 crc kubenswrapper[4760]: I0121 16:07:32.148478 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/98fdd45e-ce0f-464e-9ac9-a61c03e0eea5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"98fdd45e-ce0f-464e-9ac9-a61c03e0eea5\") " pod="openstack/cinder-scheduler-0"
Jan 21 16:07:32 crc kubenswrapper[4760]: I0121 16:07:32.149978 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25lf4\" (UniqueName: \"kubernetes.io/projected/98fdd45e-ce0f-464e-9ac9-a61c03e0eea5-kube-api-access-25lf4\") pod \"cinder-scheduler-0\" (UID: \"98fdd45e-ce0f-464e-9ac9-a61c03e0eea5\") " pod="openstack/cinder-scheduler-0"
Jan 21 16:07:32 crc kubenswrapper[4760]: I0121 16:07:32.222058 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5c89c5dbb6-sspr9"
Jan 21 16:07:32 crc kubenswrapper[4760]: I0121 16:07:32.264661 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 21 16:07:32 crc kubenswrapper[4760]: I0121 16:07:32.342871 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5df994f884-hfwfn"]
Jan 21 16:07:32 crc kubenswrapper[4760]: I0121 16:07:32.343181 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5df994f884-hfwfn" podUID="4383f2f1-00d7-4c21-905a-944cd4f852fc" containerName="barbican-api-log" containerID="cri-o://3e7cedae06c543fbef38a7847d35719cdd0c7f753de3bd32a6d076c460380278" gracePeriod=30
Jan 21 16:07:32 crc kubenswrapper[4760]: I0121 16:07:32.343308 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5df994f884-hfwfn" podUID="4383f2f1-00d7-4c21-905a-944cd4f852fc" containerName="barbican-api" containerID="cri-o://9ab8f2da6be3b8cb3c830392620ab6c9ec70000de151fbfa039c844d7dd5b01b" gracePeriod=30
Jan 21 16:07:32 crc kubenswrapper[4760]: I0121 16:07:32.816469 4760 generic.go:334] "Generic (PLEG): container finished" podID="4383f2f1-00d7-4c21-905a-944cd4f852fc" containerID="3e7cedae06c543fbef38a7847d35719cdd0c7f753de3bd32a6d076c460380278" exitCode=143
Jan 21 16:07:32 crc kubenswrapper[4760]: I0121 16:07:32.817072 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5df994f884-hfwfn" event={"ID":"4383f2f1-00d7-4c21-905a-944cd4f852fc","Type":"ContainerDied","Data":"3e7cedae06c543fbef38a7847d35719cdd0c7f753de3bd32a6d076c460380278"}
Jan 21 16:07:32 crc kubenswrapper[4760]: I0121 16:07:32.817706 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 21 16:07:32 crc kubenswrapper[4760]: I0121 16:07:32.889731 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 21 16:07:32 crc kubenswrapper[4760]: I0121 16:07:32.951426 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-789c75ff48-s7f9p"
Jan 21 16:07:32 crc kubenswrapper[4760]: I0121 16:07:32.952107 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-789c75ff48-s7f9p"
Jan 21 16:07:33 crc kubenswrapper[4760]: I0121 16:07:33.093829 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5c9896dc76-gwrzv"
Jan 21 16:07:33 crc kubenswrapper[4760]: I0121 16:07:33.094368 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5c9896dc76-gwrzv"
Jan 21 16:07:33 crc kubenswrapper[4760]: I0121 16:07:33.663123 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29ee8909-527a-4a4d-a04c-9a401c551a6d" path="/var/lib/kubelet/pods/29ee8909-527a-4a4d-a04c-9a401c551a6d/volumes"
Jan 21 16:07:33 crc kubenswrapper[4760]: I0121 16:07:33.837127 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"98fdd45e-ce0f-464e-9ac9-a61c03e0eea5","Type":"ContainerStarted","Data":"af019c9ad1d0a24bdad9c27c087a35c654cebc8634055e282ca5e43d4c3de3ac"}
Jan 21 16:07:33 crc kubenswrapper[4760]: I0121 16:07:33.837221 4760 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 21 16:07:34 crc kubenswrapper[4760]: I0121 16:07:34.222187 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 21 16:07:34 crc kubenswrapper[4760]: I0121 16:07:34.222849 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 21 16:07:34 crc kubenswrapper[4760]: I0121 16:07:34.284842 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 21 16:07:34 crc kubenswrapper[4760]: I0121 16:07:34.289639 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 21 16:07:34 crc kubenswrapper[4760]: I0121 16:07:34.858725 4760 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 21 16:07:34 crc kubenswrapper[4760]: I0121 16:07:34.858759 4760 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 21 16:07:34 crc kubenswrapper[4760]: I0121 16:07:34.859648 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"98fdd45e-ce0f-464e-9ac9-a61c03e0eea5","Type":"ContainerStarted","Data":"2563b50d26162127e7bfd7763e441b5c0aa99ffa1d00b1248392c8e01abccab5"}
Jan 21 16:07:34 crc kubenswrapper[4760]: I0121 16:07:34.860452 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 21 16:07:34 crc kubenswrapper[4760]: I0121 16:07:34.860473 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 21 16:07:35 crc kubenswrapper[4760]: I0121 16:07:35.841516 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5df994f884-hfwfn" podUID="4383f2f1-00d7-4c21-905a-944cd4f852fc" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.157:9311/healthcheck\": read tcp 10.217.0.2:43348->10.217.0.157:9311: read: connection reset by peer"
Jan 21 16:07:35 crc kubenswrapper[4760]: I0121 16:07:35.841592 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5df994f884-hfwfn" podUID="4383f2f1-00d7-4c21-905a-944cd4f852fc" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.157:9311/healthcheck\": read tcp 10.217.0.2:43342->10.217.0.157:9311: read: connection reset by peer"
Jan 21 16:07:35 crc kubenswrapper[4760]: I0121 16:07:35.893670 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"98fdd45e-ce0f-464e-9ac9-a61c03e0eea5","Type":"ContainerStarted","Data":"531d0be5bfb9ec449267c48e9c2b5d8c9cb90c3fb0a02154d7aef65205e543e7"}
Jan 21 16:07:35 crc kubenswrapper[4760]: I0121 16:07:35.899786 4760 generic.go:334] "Generic (PLEG): container finished" podID="4383f2f1-00d7-4c21-905a-944cd4f852fc" containerID="9ab8f2da6be3b8cb3c830392620ab6c9ec70000de151fbfa039c844d7dd5b01b" exitCode=0
Jan 21 16:07:35 crc kubenswrapper[4760]: I0121 16:07:35.900476 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5df994f884-hfwfn" event={"ID":"4383f2f1-00d7-4c21-905a-944cd4f852fc","Type":"ContainerDied","Data":"9ab8f2da6be3b8cb3c830392620ab6c9ec70000de151fbfa039c844d7dd5b01b"}
Jan 21 16:07:35 crc kubenswrapper[4760]: I0121 16:07:35.936110 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.936087984 podStartE2EDuration="4.936087984s" podCreationTimestamp="2026-01-21 16:07:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:07:35.924850476 +0000 UTC m=+1226.592620054" watchObservedRunningTime="2026-01-21 16:07:35.936087984 +0000 UTC m=+1226.603857562"
Jan 21 16:07:35 crc kubenswrapper[4760]: I0121 16:07:35.962229 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 21 16:07:35 crc kubenswrapper[4760]: I0121 16:07:35.962528 4760 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 21 16:07:35 crc kubenswrapper[4760]: I0121 16:07:35.963144 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 21 16:07:36 crc kubenswrapper[4760]: I0121 16:07:36.606479 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5df994f884-hfwfn"
Jan 21 16:07:36 crc kubenswrapper[4760]: I0121 16:07:36.684856 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4383f2f1-00d7-4c21-905a-944cd4f852fc-config-data-custom\") pod \"4383f2f1-00d7-4c21-905a-944cd4f852fc\" (UID: \"4383f2f1-00d7-4c21-905a-944cd4f852fc\") "
Jan 21 16:07:36 crc kubenswrapper[4760]: I0121 16:07:36.684969 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfcpw\" (UniqueName: \"kubernetes.io/projected/4383f2f1-00d7-4c21-905a-944cd4f852fc-kube-api-access-dfcpw\") pod \"4383f2f1-00d7-4c21-905a-944cd4f852fc\" (UID: \"4383f2f1-00d7-4c21-905a-944cd4f852fc\") "
Jan 21 16:07:36 crc kubenswrapper[4760]: I0121 16:07:36.685009 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4383f2f1-00d7-4c21-905a-944cd4f852fc-combined-ca-bundle\") pod \"4383f2f1-00d7-4c21-905a-944cd4f852fc\" (UID: \"4383f2f1-00d7-4c21-905a-944cd4f852fc\") "
Jan 21 16:07:36 crc kubenswrapper[4760]: I0121 16:07:36.685121 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4383f2f1-00d7-4c21-905a-944cd4f852fc-config-data\") pod \"4383f2f1-00d7-4c21-905a-944cd4f852fc\" (UID: \"4383f2f1-00d7-4c21-905a-944cd4f852fc\") "
Jan 21 16:07:36 crc kubenswrapper[4760]: I0121 16:07:36.685166 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4383f2f1-00d7-4c21-905a-944cd4f852fc-logs\") pod \"4383f2f1-00d7-4c21-905a-944cd4f852fc\" (UID: \"4383f2f1-00d7-4c21-905a-944cd4f852fc\") "
Jan 21 16:07:36 crc kubenswrapper[4760]: I0121 16:07:36.688004 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4383f2f1-00d7-4c21-905a-944cd4f852fc-logs" (OuterVolumeSpecName: "logs") pod "4383f2f1-00d7-4c21-905a-944cd4f852fc" (UID: "4383f2f1-00d7-4c21-905a-944cd4f852fc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:07:36 crc kubenswrapper[4760]: I0121 16:07:36.699092 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4383f2f1-00d7-4c21-905a-944cd4f852fc-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4383f2f1-00d7-4c21-905a-944cd4f852fc" (UID: "4383f2f1-00d7-4c21-905a-944cd4f852fc"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:07:36 crc kubenswrapper[4760]: I0121 16:07:36.742628 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4383f2f1-00d7-4c21-905a-944cd4f852fc-kube-api-access-dfcpw" (OuterVolumeSpecName: "kube-api-access-dfcpw") pod "4383f2f1-00d7-4c21-905a-944cd4f852fc" (UID: "4383f2f1-00d7-4c21-905a-944cd4f852fc"). InnerVolumeSpecName "kube-api-access-dfcpw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:07:36 crc kubenswrapper[4760]: I0121 16:07:36.779612 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4383f2f1-00d7-4c21-905a-944cd4f852fc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4383f2f1-00d7-4c21-905a-944cd4f852fc" (UID: "4383f2f1-00d7-4c21-905a-944cd4f852fc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:07:36 crc kubenswrapper[4760]: I0121 16:07:36.792529 4760 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4383f2f1-00d7-4c21-905a-944cd4f852fc-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 21 16:07:36 crc kubenswrapper[4760]: I0121 16:07:36.792574 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfcpw\" (UniqueName: \"kubernetes.io/projected/4383f2f1-00d7-4c21-905a-944cd4f852fc-kube-api-access-dfcpw\") on node \"crc\" DevicePath \"\""
Jan 21 16:07:36 crc kubenswrapper[4760]: I0121 16:07:36.792589 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4383f2f1-00d7-4c21-905a-944cd4f852fc-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 16:07:36 crc kubenswrapper[4760]: I0121 16:07:36.792611 4760 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4383f2f1-00d7-4c21-905a-944cd4f852fc-logs\") on node \"crc\" DevicePath \"\""
Jan 21 16:07:36 crc kubenswrapper[4760]: I0121 16:07:36.864613 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4383f2f1-00d7-4c21-905a-944cd4f852fc-config-data" (OuterVolumeSpecName: "config-data") pod "4383f2f1-00d7-4c21-905a-944cd4f852fc" (UID: "4383f2f1-00d7-4c21-905a-944cd4f852fc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:07:36 crc kubenswrapper[4760]: I0121 16:07:36.894722 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4383f2f1-00d7-4c21-905a-944cd4f852fc-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 16:07:36 crc kubenswrapper[4760]: I0121 16:07:36.953793 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5df994f884-hfwfn" event={"ID":"4383f2f1-00d7-4c21-905a-944cd4f852fc","Type":"ContainerDied","Data":"8bec3b20d7b1705e8c8c30a5ecc62a2a295136a16c3bef117e56b2501c5643a3"}
Jan 21 16:07:36 crc kubenswrapper[4760]: I0121 16:07:36.953866 4760 scope.go:117] "RemoveContainer" containerID="9ab8f2da6be3b8cb3c830392620ab6c9ec70000de151fbfa039c844d7dd5b01b"
Jan 21 16:07:36 crc kubenswrapper[4760]: I0121 16:07:36.954146 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5df994f884-hfwfn"
Jan 21 16:07:36 crc kubenswrapper[4760]: I0121 16:07:36.954245 4760 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 21 16:07:36 crc kubenswrapper[4760]: I0121 16:07:36.954348 4760 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 21 16:07:37 crc kubenswrapper[4760]: I0121 16:07:37.024382 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5df994f884-hfwfn"]
Jan 21 16:07:37 crc kubenswrapper[4760]: I0121 16:07:37.029370 4760 scope.go:117] "RemoveContainer" containerID="3e7cedae06c543fbef38a7847d35719cdd0c7f753de3bd32a6d076c460380278"
Jan 21 16:07:37 crc kubenswrapper[4760]: I0121 16:07:37.034016 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-5df994f884-hfwfn"]
Jan 21 16:07:37 crc kubenswrapper[4760]: I0121 16:07:37.266162 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Jan 21 16:07:37 crc
kubenswrapper[4760]: I0121 16:07:37.639993 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4383f2f1-00d7-4c21-905a-944cd4f852fc" path="/var/lib/kubelet/pods/4383f2f1-00d7-4c21-905a-944cd4f852fc/volumes"
Jan 21 16:07:38 crc kubenswrapper[4760]: I0121 16:07:38.304045 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 21 16:07:38 crc kubenswrapper[4760]: I0121 16:07:38.304516 4760 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 21 16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.438015 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 21 16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.646373 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-7c9f777647-hfk58"]
Jan 21 16:07:39 crc kubenswrapper[4760]: E0121 16:07:39.647207 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4383f2f1-00d7-4c21-905a-944cd4f852fc" containerName="barbican-api-log"
Jan 21 16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.647236 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="4383f2f1-00d7-4c21-905a-944cd4f852fc" containerName="barbican-api-log"
Jan 21 16:07:39 crc kubenswrapper[4760]: E0121 16:07:39.647305 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4383f2f1-00d7-4c21-905a-944cd4f852fc" containerName="barbican-api"
Jan 21 16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.647317 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="4383f2f1-00d7-4c21-905a-944cd4f852fc" containerName="barbican-api"
Jan 21 16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.647588 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="4383f2f1-00d7-4c21-905a-944cd4f852fc" containerName="barbican-api-log"
Jan 21 16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.647638 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="4383f2f1-00d7-4c21-905a-944cd4f852fc" containerName="barbican-api"
Jan 21 16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.648840 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-7c9f777647-hfk58"
Jan 21 16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.653118 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Jan 21 16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.653639 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc"
Jan 21 16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.653754 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc"
Jan 21 16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.673496 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7c9f777647-hfk58"]
Jan 21 16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.816219 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c-internal-tls-certs\") pod \"swift-proxy-7c9f777647-hfk58\" (UID: \"92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c\") " pod="openstack/swift-proxy-7c9f777647-hfk58"
Jan 21 16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.817152 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c-combined-ca-bundle\") pod \"swift-proxy-7c9f777647-hfk58\" (UID: \"92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c\") " pod="openstack/swift-proxy-7c9f777647-hfk58"
Jan 21 16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.817231 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c-etc-swift\") pod \"swift-proxy-7c9f777647-hfk58\" (UID: \"92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c\") " pod="openstack/swift-proxy-7c9f777647-hfk58"
Jan 21 16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.817254 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c-log-httpd\") pod \"swift-proxy-7c9f777647-hfk58\" (UID: \"92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c\") " pod="openstack/swift-proxy-7c9f777647-hfk58"
Jan 21 16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.817353 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c-config-data\") pod \"swift-proxy-7c9f777647-hfk58\" (UID: \"92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c\") " pod="openstack/swift-proxy-7c9f777647-hfk58"
Jan 21 16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.818173 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c-run-httpd\") pod \"swift-proxy-7c9f777647-hfk58\" (UID: \"92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c\") " pod="openstack/swift-proxy-7c9f777647-hfk58"
Jan 21 16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.818200 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnnpv\" (UniqueName: \"kubernetes.io/projected/92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c-kube-api-access-fnnpv\") pod \"swift-proxy-7c9f777647-hfk58\" (UID: \"92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c\") " pod="openstack/swift-proxy-7c9f777647-hfk58"
Jan 21 16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.818218 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c-public-tls-certs\") pod \"swift-proxy-7c9f777647-hfk58\" (UID: \"92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c\") " pod="openstack/swift-proxy-7c9f777647-hfk58"
Jan 21 16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.919427 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c-run-httpd\") pod \"swift-proxy-7c9f777647-hfk58\" (UID: \"92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c\") " pod="openstack/swift-proxy-7c9f777647-hfk58"
Jan 21 16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.919782 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c-public-tls-certs\") pod \"swift-proxy-7c9f777647-hfk58\" (UID: \"92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c\") " pod="openstack/swift-proxy-7c9f777647-hfk58"
Jan 21 16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.919866 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnnpv\" (UniqueName: \"kubernetes.io/projected/92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c-kube-api-access-fnnpv\") pod \"swift-proxy-7c9f777647-hfk58\" (UID: \"92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c\") " pod="openstack/swift-proxy-7c9f777647-hfk58"
Jan 21 16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.919953 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c-internal-tls-certs\") pod \"swift-proxy-7c9f777647-hfk58\" (UID: \"92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c\") " pod="openstack/swift-proxy-7c9f777647-hfk58"
Jan 21 16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.920031 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c-combined-ca-bundle\") pod \"swift-proxy-7c9f777647-hfk58\" (UID: \"92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c\") " pod="openstack/swift-proxy-7c9f777647-hfk58"
Jan 21 16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.920129 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c-etc-swift\") pod \"swift-proxy-7c9f777647-hfk58\" (UID: \"92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c\") " pod="openstack/swift-proxy-7c9f777647-hfk58"
Jan 21 16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.920216 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c-log-httpd\") pod \"swift-proxy-7c9f777647-hfk58\" (UID: \"92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c\") " pod="openstack/swift-proxy-7c9f777647-hfk58"
Jan 21 16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.920353 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c-config-data\") pod \"swift-proxy-7c9f777647-hfk58\" (UID: \"92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c\") " pod="openstack/swift-proxy-7c9f777647-hfk58"
Jan 21 16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.920756 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c-log-httpd\") pod \"swift-proxy-7c9f777647-hfk58\" (UID: \"92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c\") " pod="openstack/swift-proxy-7c9f777647-hfk58"
Jan 21 16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.920240 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c-run-httpd\") pod \"swift-proxy-7c9f777647-hfk58\" (UID: \"92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c\") " pod="openstack/swift-proxy-7c9f777647-hfk58"
Jan 21 16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.933145 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c-combined-ca-bundle\") pod \"swift-proxy-7c9f777647-hfk58\" (UID: \"92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c\") " pod="openstack/swift-proxy-7c9f777647-hfk58"
Jan 21 16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.933370 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c-internal-tls-certs\") pod \"swift-proxy-7c9f777647-hfk58\" (UID: \"92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c\") " pod="openstack/swift-proxy-7c9f777647-hfk58"
Jan 21 16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.934026 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c-config-data\") pod \"swift-proxy-7c9f777647-hfk58\" (UID: \"92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c\") " pod="openstack/swift-proxy-7c9f777647-hfk58"
Jan 21 16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.934544 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c-public-tls-certs\") pod \"swift-proxy-7c9f777647-hfk58\" (UID: \"92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c\") " pod="openstack/swift-proxy-7c9f777647-hfk58"
Jan 21 16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.935260 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c-etc-swift\") pod \"swift-proxy-7c9f777647-hfk58\" (UID: \"92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c\") " pod="openstack/swift-proxy-7c9f777647-hfk58"
Jan 21
16:07:39 crc kubenswrapper[4760]: I0121 16:07:39.937646 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnnpv\" (UniqueName: \"kubernetes.io/projected/92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c-kube-api-access-fnnpv\") pod \"swift-proxy-7c9f777647-hfk58\" (UID: \"92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c\") " pod="openstack/swift-proxy-7c9f777647-hfk58" Jan 21 16:07:40 crc kubenswrapper[4760]: I0121 16:07:40.012845 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-7c9f777647-hfk58" Jan 21 16:07:40 crc kubenswrapper[4760]: I0121 16:07:40.365568 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 21 16:07:40 crc kubenswrapper[4760]: I0121 16:07:40.699689 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7c9f777647-hfk58"] Jan 21 16:07:41 crc kubenswrapper[4760]: I0121 16:07:41.021369 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7c9f777647-hfk58" event={"ID":"92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c","Type":"ContainerStarted","Data":"10bd7dbea2dce782e2cd19eeb59d2f4121918402859f6aa6966bec41415584d6"} Jan 21 16:07:41 crc kubenswrapper[4760]: I0121 16:07:41.021749 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7c9f777647-hfk58" event={"ID":"92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c","Type":"ContainerStarted","Data":"23aeacc85e381daa41e9d79d5e3cc5306507712c158cd3e3e9a63d14691a6517"} Jan 21 16:07:41 crc kubenswrapper[4760]: I0121 16:07:41.615548 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:07:41 crc kubenswrapper[4760]: I0121 16:07:41.615919 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cae71bb0-4c04-47db-a201-a172da79df7f" containerName="ceilometer-central-agent" 
containerID="cri-o://f0d98c1476b4ecf1bf06d6ea80af30154414e1a1beeaa39901bfdfed920e1539" gracePeriod=30 Jan 21 16:07:41 crc kubenswrapper[4760]: I0121 16:07:41.616007 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cae71bb0-4c04-47db-a201-a172da79df7f" containerName="sg-core" containerID="cri-o://deb4a03bcdb098135ada43846c6b9ac2431c7af97a67335e5c3d752605125767" gracePeriod=30 Jan 21 16:07:41 crc kubenswrapper[4760]: I0121 16:07:41.616038 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cae71bb0-4c04-47db-a201-a172da79df7f" containerName="ceilometer-notification-agent" containerID="cri-o://9fd93af6e0533d1631dd037511cefd785d35dd3a23e69afcd53595504737104f" gracePeriod=30 Jan 21 16:07:41 crc kubenswrapper[4760]: I0121 16:07:41.616885 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cae71bb0-4c04-47db-a201-a172da79df7f" containerName="proxy-httpd" containerID="cri-o://ca33b1a56688fae210f2bb0cdd0b0e70f6be2997add14258647b51bc6c275be1" gracePeriod=30 Jan 21 16:07:42 crc kubenswrapper[4760]: I0121 16:07:42.194828 4760 generic.go:334] "Generic (PLEG): container finished" podID="cae71bb0-4c04-47db-a201-a172da79df7f" containerID="ca33b1a56688fae210f2bb0cdd0b0e70f6be2997add14258647b51bc6c275be1" exitCode=0 Jan 21 16:07:42 crc kubenswrapper[4760]: I0121 16:07:42.195192 4760 generic.go:334] "Generic (PLEG): container finished" podID="cae71bb0-4c04-47db-a201-a172da79df7f" containerID="deb4a03bcdb098135ada43846c6b9ac2431c7af97a67335e5c3d752605125767" exitCode=2 Jan 21 16:07:42 crc kubenswrapper[4760]: I0121 16:07:42.194909 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cae71bb0-4c04-47db-a201-a172da79df7f","Type":"ContainerDied","Data":"ca33b1a56688fae210f2bb0cdd0b0e70f6be2997add14258647b51bc6c275be1"} Jan 21 16:07:42 crc kubenswrapper[4760]: I0121 
16:07:42.195270 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cae71bb0-4c04-47db-a201-a172da79df7f","Type":"ContainerDied","Data":"deb4a03bcdb098135ada43846c6b9ac2431c7af97a67335e5c3d752605125767"} Jan 21 16:07:42 crc kubenswrapper[4760]: I0121 16:07:42.214670 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7c9f777647-hfk58" event={"ID":"92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c","Type":"ContainerStarted","Data":"47e42ac93f9c89b2395fae79c50ace671c28f60042706cbf311973240ba077ab"} Jan 21 16:07:42 crc kubenswrapper[4760]: I0121 16:07:42.215493 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7c9f777647-hfk58" Jan 21 16:07:42 crc kubenswrapper[4760]: I0121 16:07:42.215576 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7c9f777647-hfk58" Jan 21 16:07:42 crc kubenswrapper[4760]: I0121 16:07:42.251952 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-7c9f777647-hfk58" podStartSLOduration=3.251931462 podStartE2EDuration="3.251931462s" podCreationTimestamp="2026-01-21 16:07:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:07:42.250675621 +0000 UTC m=+1232.918445209" watchObservedRunningTime="2026-01-21 16:07:42.251931462 +0000 UTC m=+1232.919701040" Jan 21 16:07:42 crc kubenswrapper[4760]: I0121 16:07:42.587180 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 21 16:07:42 crc kubenswrapper[4760]: I0121 16:07:42.954190 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-789c75ff48-s7f9p" podUID="9ce8d17c-d046-45b5-9136-6faca838de63" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial 
tcp 10.217.0.145:8443: connect: connection refused" Jan 21 16:07:43 crc kubenswrapper[4760]: I0121 16:07:43.094231 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5c9896dc76-gwrzv" podUID="0e7e96ce-a64f-4a21-97e1-b2ebabc7e236" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Jan 21 16:07:43 crc kubenswrapper[4760]: I0121 16:07:43.205876 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:07:43 crc kubenswrapper[4760]: I0121 16:07:43.207191 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="53329917-a467-4919-b5ad-170f6fa50655" containerName="glance-log" containerID="cri-o://fc68b09c45055ed6d0b269a44950b81485cd71d02e0eb716a5258b78cd1762cd" gracePeriod=30 Jan 21 16:07:43 crc kubenswrapper[4760]: I0121 16:07:43.207419 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="53329917-a467-4919-b5ad-170f6fa50655" containerName="glance-httpd" containerID="cri-o://050d2576ad25bdb1d46b6e2ab6c5a3cab9f4800dbe0a57b22066038844f5e73f" gracePeriod=30 Jan 21 16:07:43 crc kubenswrapper[4760]: I0121 16:07:43.240744 4760 generic.go:334] "Generic (PLEG): container finished" podID="cae71bb0-4c04-47db-a201-a172da79df7f" containerID="f0d98c1476b4ecf1bf06d6ea80af30154414e1a1beeaa39901bfdfed920e1539" exitCode=0 Jan 21 16:07:43 crc kubenswrapper[4760]: I0121 16:07:43.242229 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cae71bb0-4c04-47db-a201-a172da79df7f","Type":"ContainerDied","Data":"f0d98c1476b4ecf1bf06d6ea80af30154414e1a1beeaa39901bfdfed920e1539"} Jan 21 16:07:44 crc kubenswrapper[4760]: I0121 16:07:44.277397 4760 generic.go:334] "Generic (PLEG): container finished" 
podID="53329917-a467-4919-b5ad-170f6fa50655" containerID="fc68b09c45055ed6d0b269a44950b81485cd71d02e0eb716a5258b78cd1762cd" exitCode=143 Jan 21 16:07:44 crc kubenswrapper[4760]: I0121 16:07:44.277491 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"53329917-a467-4919-b5ad-170f6fa50655","Type":"ContainerDied","Data":"fc68b09c45055ed6d0b269a44950b81485cd71d02e0eb716a5258b78cd1762cd"} Jan 21 16:07:45 crc kubenswrapper[4760]: I0121 16:07:45.044442 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7c9f777647-hfk58" Jan 21 16:07:45 crc kubenswrapper[4760]: I0121 16:07:45.467187 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:07:45 crc kubenswrapper[4760]: I0121 16:07:45.467861 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="8f0945b1-00a1-4723-8047-b44cee375d10" containerName="glance-log" containerID="cri-o://42db8ff90470066bc138cdc1856890c087031809113b3c8bd808bb47f070b4c1" gracePeriod=30 Jan 21 16:07:45 crc kubenswrapper[4760]: I0121 16:07:45.468429 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="8f0945b1-00a1-4723-8047-b44cee375d10" containerName="glance-httpd" containerID="cri-o://ddb3c1768bc7195398095eaeb0ab7d4403d50e3ebdea9c8b9ec55dcb7836fac2" gracePeriod=30 Jan 21 16:07:46 crc kubenswrapper[4760]: I0121 16:07:46.301652 4760 generic.go:334] "Generic (PLEG): container finished" podID="cae71bb0-4c04-47db-a201-a172da79df7f" containerID="9fd93af6e0533d1631dd037511cefd785d35dd3a23e69afcd53595504737104f" exitCode=0 Jan 21 16:07:46 crc kubenswrapper[4760]: I0121 16:07:46.301747 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"cae71bb0-4c04-47db-a201-a172da79df7f","Type":"ContainerDied","Data":"9fd93af6e0533d1631dd037511cefd785d35dd3a23e69afcd53595504737104f"} Jan 21 16:07:46 crc kubenswrapper[4760]: I0121 16:07:46.307996 4760 generic.go:334] "Generic (PLEG): container finished" podID="8f0945b1-00a1-4723-8047-b44cee375d10" containerID="42db8ff90470066bc138cdc1856890c087031809113b3c8bd808bb47f070b4c1" exitCode=143 Jan 21 16:07:46 crc kubenswrapper[4760]: I0121 16:07:46.308058 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8f0945b1-00a1-4723-8047-b44cee375d10","Type":"ContainerDied","Data":"42db8ff90470066bc138cdc1856890c087031809113b3c8bd808bb47f070b4c1"} Jan 21 16:07:47 crc kubenswrapper[4760]: I0121 16:07:47.332219 4760 generic.go:334] "Generic (PLEG): container finished" podID="53329917-a467-4919-b5ad-170f6fa50655" containerID="050d2576ad25bdb1d46b6e2ab6c5a3cab9f4800dbe0a57b22066038844f5e73f" exitCode=0 Jan 21 16:07:47 crc kubenswrapper[4760]: I0121 16:07:47.332694 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"53329917-a467-4919-b5ad-170f6fa50655","Type":"ContainerDied","Data":"050d2576ad25bdb1d46b6e2ab6c5a3cab9f4800dbe0a57b22066038844f5e73f"} Jan 21 16:07:49 crc kubenswrapper[4760]: I0121 16:07:49.352856 4760 generic.go:334] "Generic (PLEG): container finished" podID="8f0945b1-00a1-4723-8047-b44cee375d10" containerID="ddb3c1768bc7195398095eaeb0ab7d4403d50e3ebdea9c8b9ec55dcb7836fac2" exitCode=0 Jan 21 16:07:49 crc kubenswrapper[4760]: I0121 16:07:49.352946 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8f0945b1-00a1-4723-8047-b44cee375d10","Type":"ContainerDied","Data":"ddb3c1768bc7195398095eaeb0ab7d4403d50e3ebdea9c8b9ec55dcb7836fac2"} Jan 21 16:07:50 crc kubenswrapper[4760]: I0121 16:07:50.021025 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/swift-proxy-7c9f777647-hfk58" Jan 21 16:07:50 crc kubenswrapper[4760]: I0121 16:07:50.426414 4760 generic.go:334] "Generic (PLEG): container finished" podID="e76b744a-9845-4295-80c1-eb276462b45f" containerID="9cc80e3ae9c6ec3d04a4999ad52486b1418f261b21c407378d5c983416a30d7e" exitCode=137 Jan 21 16:07:50 crc kubenswrapper[4760]: I0121 16:07:50.427381 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e76b744a-9845-4295-80c1-eb276462b45f","Type":"ContainerDied","Data":"9cc80e3ae9c6ec3d04a4999ad52486b1418f261b21c407378d5c983416a30d7e"} Jan 21 16:07:50 crc kubenswrapper[4760]: I0121 16:07:50.769175 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 16:07:50 crc kubenswrapper[4760]: I0121 16:07:50.874006 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/53329917-a467-4919-b5ad-170f6fa50655-public-tls-certs\") pod \"53329917-a467-4919-b5ad-170f6fa50655\" (UID: \"53329917-a467-4919-b5ad-170f6fa50655\") " Jan 21 16:07:50 crc kubenswrapper[4760]: I0121 16:07:50.874085 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wcsh\" (UniqueName: \"kubernetes.io/projected/53329917-a467-4919-b5ad-170f6fa50655-kube-api-access-7wcsh\") pod \"53329917-a467-4919-b5ad-170f6fa50655\" (UID: \"53329917-a467-4919-b5ad-170f6fa50655\") " Jan 21 16:07:50 crc kubenswrapper[4760]: I0121 16:07:50.874146 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53329917-a467-4919-b5ad-170f6fa50655-scripts\") pod \"53329917-a467-4919-b5ad-170f6fa50655\" (UID: \"53329917-a467-4919-b5ad-170f6fa50655\") " Jan 21 16:07:50 crc kubenswrapper[4760]: I0121 16:07:50.874176 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/53329917-a467-4919-b5ad-170f6fa50655-config-data\") pod \"53329917-a467-4919-b5ad-170f6fa50655\" (UID: \"53329917-a467-4919-b5ad-170f6fa50655\") " Jan 21 16:07:50 crc kubenswrapper[4760]: I0121 16:07:50.874251 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/53329917-a467-4919-b5ad-170f6fa50655-httpd-run\") pod \"53329917-a467-4919-b5ad-170f6fa50655\" (UID: \"53329917-a467-4919-b5ad-170f6fa50655\") " Jan 21 16:07:50 crc kubenswrapper[4760]: I0121 16:07:50.874312 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"53329917-a467-4919-b5ad-170f6fa50655\" (UID: \"53329917-a467-4919-b5ad-170f6fa50655\") " Jan 21 16:07:50 crc kubenswrapper[4760]: I0121 16:07:50.874385 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53329917-a467-4919-b5ad-170f6fa50655-combined-ca-bundle\") pod \"53329917-a467-4919-b5ad-170f6fa50655\" (UID: \"53329917-a467-4919-b5ad-170f6fa50655\") " Jan 21 16:07:50 crc kubenswrapper[4760]: I0121 16:07:50.874452 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53329917-a467-4919-b5ad-170f6fa50655-logs\") pod \"53329917-a467-4919-b5ad-170f6fa50655\" (UID: \"53329917-a467-4919-b5ad-170f6fa50655\") " Jan 21 16:07:50 crc kubenswrapper[4760]: I0121 16:07:50.875507 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53329917-a467-4919-b5ad-170f6fa50655-logs" (OuterVolumeSpecName: "logs") pod "53329917-a467-4919-b5ad-170f6fa50655" (UID: "53329917-a467-4919-b5ad-170f6fa50655"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:07:50 crc kubenswrapper[4760]: I0121 16:07:50.890001 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53329917-a467-4919-b5ad-170f6fa50655-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "53329917-a467-4919-b5ad-170f6fa50655" (UID: "53329917-a467-4919-b5ad-170f6fa50655"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:07:50 crc kubenswrapper[4760]: I0121 16:07:50.892479 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "53329917-a467-4919-b5ad-170f6fa50655" (UID: "53329917-a467-4919-b5ad-170f6fa50655"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 16:07:50 crc kubenswrapper[4760]: I0121 16:07:50.893201 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53329917-a467-4919-b5ad-170f6fa50655-scripts" (OuterVolumeSpecName: "scripts") pod "53329917-a467-4919-b5ad-170f6fa50655" (UID: "53329917-a467-4919-b5ad-170f6fa50655"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:50 crc kubenswrapper[4760]: I0121 16:07:50.893461 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53329917-a467-4919-b5ad-170f6fa50655-kube-api-access-7wcsh" (OuterVolumeSpecName: "kube-api-access-7wcsh") pod "53329917-a467-4919-b5ad-170f6fa50655" (UID: "53329917-a467-4919-b5ad-170f6fa50655"). InnerVolumeSpecName "kube-api-access-7wcsh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:07:50 crc kubenswrapper[4760]: I0121 16:07:50.950082 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:07:50 crc kubenswrapper[4760]: I0121 16:07:50.950156 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:07:50 crc kubenswrapper[4760]: I0121 16:07:50.979385 4760 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53329917-a467-4919-b5ad-170f6fa50655-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:50 crc kubenswrapper[4760]: I0121 16:07:50.979432 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wcsh\" (UniqueName: \"kubernetes.io/projected/53329917-a467-4919-b5ad-170f6fa50655-kube-api-access-7wcsh\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:50 crc kubenswrapper[4760]: I0121 16:07:50.979444 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53329917-a467-4919-b5ad-170f6fa50655-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:50 crc kubenswrapper[4760]: I0121 16:07:50.979453 4760 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/53329917-a467-4919-b5ad-170f6fa50655-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:50 crc kubenswrapper[4760]: I0121 16:07:50.979495 4760 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume 
\"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 21 16:07:50 crc kubenswrapper[4760]: I0121 16:07:50.982210 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53329917-a467-4919-b5ad-170f6fa50655-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "53329917-a467-4919-b5ad-170f6fa50655" (UID: "53329917-a467-4919-b5ad-170f6fa50655"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:50 crc kubenswrapper[4760]: I0121 16:07:50.988470 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53329917-a467-4919-b5ad-170f6fa50655-config-data" (OuterVolumeSpecName: "config-data") pod "53329917-a467-4919-b5ad-170f6fa50655" (UID: "53329917-a467-4919-b5ad-170f6fa50655"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.005540 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53329917-a467-4919-b5ad-170f6fa50655-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "53329917-a467-4919-b5ad-170f6fa50655" (UID: "53329917-a467-4919-b5ad-170f6fa50655"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.018774 4760 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.023036 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.031656 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.069161 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.081930 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53329917-a467-4919-b5ad-170f6fa50655-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.081964 4760 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/53329917-a467-4919-b5ad-170f6fa50655-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.081975 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53329917-a467-4919-b5ad-170f6fa50655-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.081986 4760 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.182867 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xb2nv\" (UniqueName: \"kubernetes.io/projected/e76b744a-9845-4295-80c1-eb276462b45f-kube-api-access-xb2nv\") pod \"e76b744a-9845-4295-80c1-eb276462b45f\" (UID: \"e76b744a-9845-4295-80c1-eb276462b45f\") " Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.183204 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e76b744a-9845-4295-80c1-eb276462b45f-combined-ca-bundle\") pod \"e76b744a-9845-4295-80c1-eb276462b45f\" (UID: 
\"e76b744a-9845-4295-80c1-eb276462b45f\") " Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.183303 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f0945b1-00a1-4723-8047-b44cee375d10-config-data\") pod \"8f0945b1-00a1-4723-8047-b44cee375d10\" (UID: \"8f0945b1-00a1-4723-8047-b44cee375d10\") " Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.183426 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cae71bb0-4c04-47db-a201-a172da79df7f-run-httpd\") pod \"cae71bb0-4c04-47db-a201-a172da79df7f\" (UID: \"cae71bb0-4c04-47db-a201-a172da79df7f\") " Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.183529 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e76b744a-9845-4295-80c1-eb276462b45f-etc-machine-id\") pod \"e76b744a-9845-4295-80c1-eb276462b45f\" (UID: \"e76b744a-9845-4295-80c1-eb276462b45f\") " Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.183617 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cae71bb0-4c04-47db-a201-a172da79df7f-config-data\") pod \"cae71bb0-4c04-47db-a201-a172da79df7f\" (UID: \"cae71bb0-4c04-47db-a201-a172da79df7f\") " Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.183710 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8f0945b1-00a1-4723-8047-b44cee375d10-httpd-run\") pod \"8f0945b1-00a1-4723-8047-b44cee375d10\" (UID: \"8f0945b1-00a1-4723-8047-b44cee375d10\") " Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.183810 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/8f0945b1-00a1-4723-8047-b44cee375d10-internal-tls-certs\") pod \"8f0945b1-00a1-4723-8047-b44cee375d10\" (UID: \"8f0945b1-00a1-4723-8047-b44cee375d10\") " Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.183969 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlhzv\" (UniqueName: \"kubernetes.io/projected/cae71bb0-4c04-47db-a201-a172da79df7f-kube-api-access-wlhzv\") pod \"cae71bb0-4c04-47db-a201-a172da79df7f\" (UID: \"cae71bb0-4c04-47db-a201-a172da79df7f\") " Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.184095 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cae71bb0-4c04-47db-a201-a172da79df7f-combined-ca-bundle\") pod \"cae71bb0-4c04-47db-a201-a172da79df7f\" (UID: \"cae71bb0-4c04-47db-a201-a172da79df7f\") " Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.184196 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"8f0945b1-00a1-4723-8047-b44cee375d10\" (UID: \"8f0945b1-00a1-4723-8047-b44cee375d10\") " Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.184290 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f0945b1-00a1-4723-8047-b44cee375d10-combined-ca-bundle\") pod \"8f0945b1-00a1-4723-8047-b44cee375d10\" (UID: \"8f0945b1-00a1-4723-8047-b44cee375d10\") " Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.184413 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e76b744a-9845-4295-80c1-eb276462b45f-logs\") pod \"e76b744a-9845-4295-80c1-eb276462b45f\" (UID: \"e76b744a-9845-4295-80c1-eb276462b45f\") " Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.184510 4760 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f0945b1-00a1-4723-8047-b44cee375d10-scripts\") pod \"8f0945b1-00a1-4723-8047-b44cee375d10\" (UID: \"8f0945b1-00a1-4723-8047-b44cee375d10\") "
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.185407 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cae71bb0-4c04-47db-a201-a172da79df7f-scripts\") pod \"cae71bb0-4c04-47db-a201-a172da79df7f\" (UID: \"cae71bb0-4c04-47db-a201-a172da79df7f\") "
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.185589 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cae71bb0-4c04-47db-a201-a172da79df7f-sg-core-conf-yaml\") pod \"cae71bb0-4c04-47db-a201-a172da79df7f\" (UID: \"cae71bb0-4c04-47db-a201-a172da79df7f\") "
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.185687 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e76b744a-9845-4295-80c1-eb276462b45f-scripts\") pod \"e76b744a-9845-4295-80c1-eb276462b45f\" (UID: \"e76b744a-9845-4295-80c1-eb276462b45f\") "
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.185789 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e76b744a-9845-4295-80c1-eb276462b45f-config-data-custom\") pod \"e76b744a-9845-4295-80c1-eb276462b45f\" (UID: \"e76b744a-9845-4295-80c1-eb276462b45f\") "
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.186167 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f0945b1-00a1-4723-8047-b44cee375d10-logs\") pod \"8f0945b1-00a1-4723-8047-b44cee375d10\" (UID: \"8f0945b1-00a1-4723-8047-b44cee375d10\") "
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.187179 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e76b744a-9845-4295-80c1-eb276462b45f-config-data\") pod \"e76b744a-9845-4295-80c1-eb276462b45f\" (UID: \"e76b744a-9845-4295-80c1-eb276462b45f\") "
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.187385 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cae71bb0-4c04-47db-a201-a172da79df7f-log-httpd\") pod \"cae71bb0-4c04-47db-a201-a172da79df7f\" (UID: \"cae71bb0-4c04-47db-a201-a172da79df7f\") "
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.187495 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8hx4\" (UniqueName: \"kubernetes.io/projected/8f0945b1-00a1-4723-8047-b44cee375d10-kube-api-access-j8hx4\") pod \"8f0945b1-00a1-4723-8047-b44cee375d10\" (UID: \"8f0945b1-00a1-4723-8047-b44cee375d10\") "
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.184638 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e76b744a-9845-4295-80c1-eb276462b45f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e76b744a-9845-4295-80c1-eb276462b45f" (UID: "e76b744a-9845-4295-80c1-eb276462b45f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.186150 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cae71bb0-4c04-47db-a201-a172da79df7f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "cae71bb0-4c04-47db-a201-a172da79df7f" (UID: "cae71bb0-4c04-47db-a201-a172da79df7f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.188987 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f0945b1-00a1-4723-8047-b44cee375d10-logs" (OuterVolumeSpecName: "logs") pod "8f0945b1-00a1-4723-8047-b44cee375d10" (UID: "8f0945b1-00a1-4723-8047-b44cee375d10"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.189684 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f0945b1-00a1-4723-8047-b44cee375d10-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "8f0945b1-00a1-4723-8047-b44cee375d10" (UID: "8f0945b1-00a1-4723-8047-b44cee375d10"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.189861 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e76b744a-9845-4295-80c1-eb276462b45f-logs" (OuterVolumeSpecName: "logs") pod "e76b744a-9845-4295-80c1-eb276462b45f" (UID: "e76b744a-9845-4295-80c1-eb276462b45f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.191545 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cae71bb0-4c04-47db-a201-a172da79df7f-kube-api-access-wlhzv" (OuterVolumeSpecName: "kube-api-access-wlhzv") pod "cae71bb0-4c04-47db-a201-a172da79df7f" (UID: "cae71bb0-4c04-47db-a201-a172da79df7f"). InnerVolumeSpecName "kube-api-access-wlhzv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.193054 4760 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cae71bb0-4c04-47db-a201-a172da79df7f-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.195950 4760 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e76b744a-9845-4295-80c1-eb276462b45f-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.196046 4760 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8f0945b1-00a1-4723-8047-b44cee375d10-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.196137 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlhzv\" (UniqueName: \"kubernetes.io/projected/cae71bb0-4c04-47db-a201-a172da79df7f-kube-api-access-wlhzv\") on node \"crc\" DevicePath \"\""
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.196224 4760 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e76b744a-9845-4295-80c1-eb276462b45f-logs\") on node \"crc\" DevicePath \"\""
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.196301 4760 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f0945b1-00a1-4723-8047-b44cee375d10-logs\") on node \"crc\" DevicePath \"\""
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.201259 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "8f0945b1-00a1-4723-8047-b44cee375d10" (UID: "8f0945b1-00a1-4723-8047-b44cee375d10"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.205501 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e76b744a-9845-4295-80c1-eb276462b45f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e76b744a-9845-4295-80c1-eb276462b45f" (UID: "e76b744a-9845-4295-80c1-eb276462b45f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.205505 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e76b744a-9845-4295-80c1-eb276462b45f-kube-api-access-xb2nv" (OuterVolumeSpecName: "kube-api-access-xb2nv") pod "e76b744a-9845-4295-80c1-eb276462b45f" (UID: "e76b744a-9845-4295-80c1-eb276462b45f"). InnerVolumeSpecName "kube-api-access-xb2nv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.205804 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cae71bb0-4c04-47db-a201-a172da79df7f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "cae71bb0-4c04-47db-a201-a172da79df7f" (UID: "cae71bb0-4c04-47db-a201-a172da79df7f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.209114 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f0945b1-00a1-4723-8047-b44cee375d10-scripts" (OuterVolumeSpecName: "scripts") pod "8f0945b1-00a1-4723-8047-b44cee375d10" (UID: "8f0945b1-00a1-4723-8047-b44cee375d10"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.211644 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cae71bb0-4c04-47db-a201-a172da79df7f-scripts" (OuterVolumeSpecName: "scripts") pod "cae71bb0-4c04-47db-a201-a172da79df7f" (UID: "cae71bb0-4c04-47db-a201-a172da79df7f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.221367 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f0945b1-00a1-4723-8047-b44cee375d10-kube-api-access-j8hx4" (OuterVolumeSpecName: "kube-api-access-j8hx4") pod "8f0945b1-00a1-4723-8047-b44cee375d10" (UID: "8f0945b1-00a1-4723-8047-b44cee375d10"). InnerVolumeSpecName "kube-api-access-j8hx4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.228611 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e76b744a-9845-4295-80c1-eb276462b45f-scripts" (OuterVolumeSpecName: "scripts") pod "e76b744a-9845-4295-80c1-eb276462b45f" (UID: "e76b744a-9845-4295-80c1-eb276462b45f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.298540 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e76b744a-9845-4295-80c1-eb276462b45f-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.298577 4760 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e76b744a-9845-4295-80c1-eb276462b45f-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.298586 4760 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cae71bb0-4c04-47db-a201-a172da79df7f-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.298597 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8hx4\" (UniqueName: \"kubernetes.io/projected/8f0945b1-00a1-4723-8047-b44cee375d10-kube-api-access-j8hx4\") on node \"crc\" DevicePath \"\""
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.298607 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xb2nv\" (UniqueName: \"kubernetes.io/projected/e76b744a-9845-4295-80c1-eb276462b45f-kube-api-access-xb2nv\") on node \"crc\" DevicePath \"\""
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.298634 4760 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" "
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.298643 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f0945b1-00a1-4723-8047-b44cee375d10-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.298652 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cae71bb0-4c04-47db-a201-a172da79df7f-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.311629 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cae71bb0-4c04-47db-a201-a172da79df7f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "cae71bb0-4c04-47db-a201-a172da79df7f" (UID: "cae71bb0-4c04-47db-a201-a172da79df7f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.368344 4760 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc"
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.368899 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f0945b1-00a1-4723-8047-b44cee375d10-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8f0945b1-00a1-4723-8047-b44cee375d10" (UID: "8f0945b1-00a1-4723-8047-b44cee375d10"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.401118 4760 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cae71bb0-4c04-47db-a201-a172da79df7f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.401161 4760 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\""
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.401192 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f0945b1-00a1-4723-8047-b44cee375d10-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.402641 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e76b744a-9845-4295-80c1-eb276462b45f-config-data" (OuterVolumeSpecName: "config-data") pod "e76b744a-9845-4295-80c1-eb276462b45f" (UID: "e76b744a-9845-4295-80c1-eb276462b45f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.414576 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f0945b1-00a1-4723-8047-b44cee375d10-config-data" (OuterVolumeSpecName: "config-data") pod "8f0945b1-00a1-4723-8047-b44cee375d10" (UID: "8f0945b1-00a1-4723-8047-b44cee375d10"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.418921 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f0945b1-00a1-4723-8047-b44cee375d10-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "8f0945b1-00a1-4723-8047-b44cee375d10" (UID: "8f0945b1-00a1-4723-8047-b44cee375d10"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.424694 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e76b744a-9845-4295-80c1-eb276462b45f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e76b744a-9845-4295-80c1-eb276462b45f" (UID: "e76b744a-9845-4295-80c1-eb276462b45f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.431979 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cae71bb0-4c04-47db-a201-a172da79df7f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cae71bb0-4c04-47db-a201-a172da79df7f" (UID: "cae71bb0-4c04-47db-a201-a172da79df7f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.460869 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cae71bb0-4c04-47db-a201-a172da79df7f","Type":"ContainerDied","Data":"c187b0e1bc1251c0fc77ae55d688bcb6a9fca7de4415e1e7807cacb325d574a7"}
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.460943 4760 scope.go:117] "RemoveContainer" containerID="ca33b1a56688fae210f2bb0cdd0b0e70f6be2997add14258647b51bc6c275be1"
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.461140 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.467736 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"8e6f14c6-f759-439a-9ea1-63a88e650f89","Type":"ContainerStarted","Data":"21f9e238cdac9da3022cd1a75894126942fb097baaf3452eeb708b84b2249791"}
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.475880 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cae71bb0-4c04-47db-a201-a172da79df7f-config-data" (OuterVolumeSpecName: "config-data") pod "cae71bb0-4c04-47db-a201-a172da79df7f" (UID: "cae71bb0-4c04-47db-a201-a172da79df7f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.494479 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e76b744a-9845-4295-80c1-eb276462b45f","Type":"ContainerDied","Data":"3dff80f153f5d28aeb9e2505196ffd6323fab10fa0cb62489e1b414aceebdd97"}
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.494602 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.503986 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e76b744a-9845-4295-80c1-eb276462b45f-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.504017 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e76b744a-9845-4295-80c1-eb276462b45f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.504026 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f0945b1-00a1-4723-8047-b44cee375d10-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.504034 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cae71bb0-4c04-47db-a201-a172da79df7f-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.504042 4760 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f0945b1-00a1-4723-8047-b44cee375d10-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.504053 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cae71bb0-4c04-47db-a201-a172da79df7f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.509516 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8f0945b1-00a1-4723-8047-b44cee375d10","Type":"ContainerDied","Data":"7a1e65b81fdc2bf2cd5f8d28a919b3bf01a637dbd30db0eacf7ef6d62e351489"}
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.509630 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.511727 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.044130489 podStartE2EDuration="22.511709679s" podCreationTimestamp="2026-01-21 16:07:29 +0000 UTC" firstStartedPulling="2026-01-21 16:07:29.998415162 +0000 UTC m=+1220.666184740" lastFinishedPulling="2026-01-21 16:07:50.465994352 +0000 UTC m=+1241.133763930" observedRunningTime="2026-01-21 16:07:51.488995931 +0000 UTC m=+1242.156765519" watchObservedRunningTime="2026-01-21 16:07:51.511709679 +0000 UTC m=+1242.179479257"
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.515690 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"53329917-a467-4919-b5ad-170f6fa50655","Type":"ContainerDied","Data":"0c1f5aaaeb7e600775b8e20b4db3b293f673d5b7306c3b455790f8f07b2cbe9c"}
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.516241 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.556495 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.571861 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.584283 4760 scope.go:117] "RemoveContainer" containerID="deb4a03bcdb098135ada43846c6b9ac2431c7af97a67335e5c3d752605125767"
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.608442 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.617805 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 21 16:07:51 crc kubenswrapper[4760]: E0121 16:07:51.618250 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cae71bb0-4c04-47db-a201-a172da79df7f" containerName="ceilometer-notification-agent"
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.618275 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="cae71bb0-4c04-47db-a201-a172da79df7f" containerName="ceilometer-notification-agent"
Jan 21 16:07:51 crc kubenswrapper[4760]: E0121 16:07:51.618295 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cae71bb0-4c04-47db-a201-a172da79df7f" containerName="sg-core"
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.618306 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="cae71bb0-4c04-47db-a201-a172da79df7f" containerName="sg-core"
Jan 21 16:07:51 crc kubenswrapper[4760]: E0121 16:07:51.618316 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cae71bb0-4c04-47db-a201-a172da79df7f" containerName="ceilometer-central-agent"
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.618326 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="cae71bb0-4c04-47db-a201-a172da79df7f" containerName="ceilometer-central-agent"
Jan 21 16:07:51 crc kubenswrapper[4760]: E0121 16:07:51.618340 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53329917-a467-4919-b5ad-170f6fa50655" containerName="glance-httpd"
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.618348 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="53329917-a467-4919-b5ad-170f6fa50655" containerName="glance-httpd"
Jan 21 16:07:51 crc kubenswrapper[4760]: E0121 16:07:51.618373 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53329917-a467-4919-b5ad-170f6fa50655" containerName="glance-log"
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.618380 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="53329917-a467-4919-b5ad-170f6fa50655" containerName="glance-log"
Jan 21 16:07:51 crc kubenswrapper[4760]: E0121 16:07:51.618390 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e76b744a-9845-4295-80c1-eb276462b45f" containerName="cinder-api-log"
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.618397 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="e76b744a-9845-4295-80c1-eb276462b45f" containerName="cinder-api-log"
Jan 21 16:07:51 crc kubenswrapper[4760]: E0121 16:07:51.618418 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cae71bb0-4c04-47db-a201-a172da79df7f" containerName="proxy-httpd"
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.618425 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="cae71bb0-4c04-47db-a201-a172da79df7f" containerName="proxy-httpd"
Jan 21 16:07:51 crc kubenswrapper[4760]: E0121 16:07:51.618444 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f0945b1-00a1-4723-8047-b44cee375d10" containerName="glance-log"
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.618453 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f0945b1-00a1-4723-8047-b44cee375d10" containerName="glance-log"
Jan 21 16:07:51 crc kubenswrapper[4760]: E0121 16:07:51.618466 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f0945b1-00a1-4723-8047-b44cee375d10" containerName="glance-httpd"
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.618473 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f0945b1-00a1-4723-8047-b44cee375d10" containerName="glance-httpd"
Jan 21 16:07:51 crc kubenswrapper[4760]: E0121 16:07:51.618488 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e76b744a-9845-4295-80c1-eb276462b45f" containerName="cinder-api"
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.618496 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="e76b744a-9845-4295-80c1-eb276462b45f" containerName="cinder-api"
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.618681 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="e76b744a-9845-4295-80c1-eb276462b45f" containerName="cinder-api"
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.618700 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="cae71bb0-4c04-47db-a201-a172da79df7f" containerName="ceilometer-central-agent"
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.618720 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="53329917-a467-4919-b5ad-170f6fa50655" containerName="glance-log"
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.618733 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f0945b1-00a1-4723-8047-b44cee375d10" containerName="glance-httpd"
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.618747 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="e76b744a-9845-4295-80c1-eb276462b45f" containerName="cinder-api-log"
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.618758 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="cae71bb0-4c04-47db-a201-a172da79df7f" containerName="proxy-httpd"
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.618772 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f0945b1-00a1-4723-8047-b44cee375d10" containerName="glance-log"
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.618783 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="cae71bb0-4c04-47db-a201-a172da79df7f" containerName="sg-core"
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.618794 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="cae71bb0-4c04-47db-a201-a172da79df7f" containerName="ceilometer-notification-agent"
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.618805 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="53329917-a467-4919-b5ad-170f6fa50655" containerName="glance-httpd"
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.620028 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.627828 4760 scope.go:117] "RemoveContainer" containerID="9fd93af6e0533d1631dd037511cefd785d35dd3a23e69afcd53595504737104f"
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.628326 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-2lr4r"
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.628601 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.628719 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.629046 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.659299 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f0945b1-00a1-4723-8047-b44cee375d10" path="/var/lib/kubelet/pods/8f0945b1-00a1-4723-8047-b44cee375d10/volumes"
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.660577 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"]
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.674783 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.702510 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.704017 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.709039 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc"
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.709272 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc"
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.709499 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.712576 4760 scope.go:117] "RemoveContainer" containerID="f0d98c1476b4ecf1bf06d6ea80af30154414e1a1beeaa39901bfdfed920e1539"
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.712819 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.725984 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.744427 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.746205 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.753950 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.754635 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.758775 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.780556 4760 scope.go:117] "RemoveContainer" containerID="9cc80e3ae9c6ec3d04a4999ad52486b1418f261b21c407378d5c983416a30d7e"
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.788482 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.809259 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d78a94b-d39f-4654-936e-8a39369b2082-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5d78a94b-d39f-4654-936e-8a39369b2082\") " pod="openstack/glance-default-internal-api-0"
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.809316 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f57ee425-0d4d-41f7-bf99-4ab4e87ead78-config-data-custom\") pod \"cinder-api-0\" (UID: \"f57ee425-0d4d-41f7-bf99-4ab4e87ead78\") " pod="openstack/cinder-api-0"
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.809363 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f57ee425-0d4d-41f7-bf99-4ab4e87ead78-scripts\") pod \"cinder-api-0\" (UID: \"f57ee425-0d4d-41f7-bf99-4ab4e87ead78\") " pod="openstack/cinder-api-0"
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.809407 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5nkw\" (UniqueName: \"kubernetes.io/projected/5d78a94b-d39f-4654-936e-8a39369b2082-kube-api-access-c5nkw\") pod \"glance-default-internal-api-0\" (UID: \"5d78a94b-d39f-4654-936e-8a39369b2082\") " pod="openstack/glance-default-internal-api-0"
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.809489 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d78a94b-d39f-4654-936e-8a39369b2082-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5d78a94b-d39f-4654-936e-8a39369b2082\") " pod="openstack/glance-default-internal-api-0"
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.809552 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f57ee425-0d4d-41f7-bf99-4ab4e87ead78-logs\") pod \"cinder-api-0\" (UID: \"f57ee425-0d4d-41f7-bf99-4ab4e87ead78\") " pod="openstack/cinder-api-0"
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.809642 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d78a94b-d39f-4654-936e-8a39369b2082-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5d78a94b-d39f-4654-936e-8a39369b2082\") " pod="openstack/glance-default-internal-api-0"
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.809665 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f57ee425-0d4d-41f7-bf99-4ab4e87ead78-public-tls-certs\") pod \"cinder-api-0\" (UID: \"f57ee425-0d4d-41f7-bf99-4ab4e87ead78\") " pod="openstack/cinder-api-0"
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.809686 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5d78a94b-d39f-4654-936e-8a39369b2082-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5d78a94b-d39f-4654-936e-8a39369b2082\") " pod="openstack/glance-default-internal-api-0"
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.809727 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f57ee425-0d4d-41f7-bf99-4ab4e87ead78-config-data\") pod \"cinder-api-0\" (UID: \"f57ee425-0d4d-41f7-bf99-4ab4e87ead78\") " pod="openstack/cinder-api-0"
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.809780 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f57ee425-0d4d-41f7-bf99-4ab4e87ead78-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"f57ee425-0d4d-41f7-bf99-4ab4e87ead78\") " pod="openstack/cinder-api-0"
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.809811 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f57ee425-0d4d-41f7-bf99-4ab4e87ead78-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f57ee425-0d4d-41f7-bf99-4ab4e87ead78\") " pod="openstack/cinder-api-0"
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.809863 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f57ee425-0d4d-41f7-bf99-4ab4e87ead78-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f57ee425-0d4d-41f7-bf99-4ab4e87ead78\") " pod="openstack/cinder-api-0"
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.809884 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d78a94b-d39f-4654-936e-8a39369b2082-logs\") pod \"glance-default-internal-api-0\" (UID: \"5d78a94b-d39f-4654-936e-8a39369b2082\") " pod="openstack/glance-default-internal-api-0"
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.809915 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w76dp\" (UniqueName: \"kubernetes.io/projected/f57ee425-0d4d-41f7-bf99-4ab4e87ead78-kube-api-access-w76dp\") pod \"cinder-api-0\" (UID: \"f57ee425-0d4d-41f7-bf99-4ab4e87ead78\") " pod="openstack/cinder-api-0"
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.809944 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"5d78a94b-d39f-4654-936e-8a39369b2082\") " pod="openstack/glance-default-internal-api-0"
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.809963 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d78a94b-d39f-4654-936e-8a39369b2082-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5d78a94b-d39f-4654-936e-8a39369b2082\") " pod="openstack/glance-default-internal-api-0"
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.856410 4760 scope.go:117] "RemoveContainer" containerID="d2470a094e14d7da50a9f1d3b50efda0fed69eae60738b00a6c247bd23c71ac1"
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.864430 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.870296 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.910967 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.914024 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f57ee425-0d4d-41f7-bf99-4ab4e87ead78-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"f57ee425-0d4d-41f7-bf99-4ab4e87ead78\") " pod="openstack/cinder-api-0"
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.914151 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f57ee425-0d4d-41f7-bf99-4ab4e87ead78-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f57ee425-0d4d-41f7-bf99-4ab4e87ead78\") " pod="openstack/cinder-api-0"
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.914257 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f57ee425-0d4d-41f7-bf99-4ab4e87ead78-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f57ee425-0d4d-41f7-bf99-4ab4e87ead78\") " pod="openstack/cinder-api-0"
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.914285 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/468d7d17-9181-4f39-851d-3acff337e10c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"468d7d17-9181-4f39-851d-3acff337e10c\") " pod="openstack/glance-default-external-api-0"
Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.914316 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/468d7d17-9181-4f39-851d-3acff337e10c-logs\") pod \"glance-default-external-api-0\" (UID: \"468d7d17-9181-4f39-851d-3acff337e10c\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.914649 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f57ee425-0d4d-41f7-bf99-4ab4e87ead78-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f57ee425-0d4d-41f7-bf99-4ab4e87ead78\") " pod="openstack/cinder-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.914677 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d78a94b-d39f-4654-936e-8a39369b2082-logs\") pod \"glance-default-internal-api-0\" (UID: \"5d78a94b-d39f-4654-936e-8a39369b2082\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.914721 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w76dp\" (UniqueName: \"kubernetes.io/projected/f57ee425-0d4d-41f7-bf99-4ab4e87ead78-kube-api-access-w76dp\") pod \"cinder-api-0\" (UID: \"f57ee425-0d4d-41f7-bf99-4ab4e87ead78\") " pod="openstack/cinder-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.914765 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"5d78a94b-d39f-4654-936e-8a39369b2082\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.915001 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: 
\"5d78a94b-d39f-4654-936e-8a39369b2082\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.916823 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d78a94b-d39f-4654-936e-8a39369b2082-logs\") pod \"glance-default-internal-api-0\" (UID: \"5d78a94b-d39f-4654-936e-8a39369b2082\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.916981 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d78a94b-d39f-4654-936e-8a39369b2082-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5d78a94b-d39f-4654-936e-8a39369b2082\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.918867 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/468d7d17-9181-4f39-851d-3acff337e10c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"468d7d17-9181-4f39-851d-3acff337e10c\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.919012 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d78a94b-d39f-4654-936e-8a39369b2082-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5d78a94b-d39f-4654-936e-8a39369b2082\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.919053 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f57ee425-0d4d-41f7-bf99-4ab4e87ead78-scripts\") pod \"cinder-api-0\" (UID: \"f57ee425-0d4d-41f7-bf99-4ab4e87ead78\") " pod="openstack/cinder-api-0" Jan 21 
16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.919080 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f57ee425-0d4d-41f7-bf99-4ab4e87ead78-config-data-custom\") pod \"cinder-api-0\" (UID: \"f57ee425-0d4d-41f7-bf99-4ab4e87ead78\") " pod="openstack/cinder-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.919146 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/468d7d17-9181-4f39-851d-3acff337e10c-config-data\") pod \"glance-default-external-api-0\" (UID: \"468d7d17-9181-4f39-851d-3acff337e10c\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.919172 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5nkw\" (UniqueName: \"kubernetes.io/projected/5d78a94b-d39f-4654-936e-8a39369b2082-kube-api-access-c5nkw\") pod \"glance-default-internal-api-0\" (UID: \"5d78a94b-d39f-4654-936e-8a39369b2082\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.919239 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/468d7d17-9181-4f39-851d-3acff337e10c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"468d7d17-9181-4f39-851d-3acff337e10c\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.919374 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq9fj\" (UniqueName: \"kubernetes.io/projected/468d7d17-9181-4f39-851d-3acff337e10c-kube-api-access-wq9fj\") pod \"glance-default-external-api-0\" (UID: \"468d7d17-9181-4f39-851d-3acff337e10c\") " pod="openstack/glance-default-external-api-0" Jan 
21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.919964 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d78a94b-d39f-4654-936e-8a39369b2082-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5d78a94b-d39f-4654-936e-8a39369b2082\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.920776 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f57ee425-0d4d-41f7-bf99-4ab4e87ead78-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"f57ee425-0d4d-41f7-bf99-4ab4e87ead78\") " pod="openstack/cinder-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.923518 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d78a94b-d39f-4654-936e-8a39369b2082-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5d78a94b-d39f-4654-936e-8a39369b2082\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.924060 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/468d7d17-9181-4f39-851d-3acff337e10c-scripts\") pod \"glance-default-external-api-0\" (UID: \"468d7d17-9181-4f39-851d-3acff337e10c\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.924337 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f57ee425-0d4d-41f7-bf99-4ab4e87ead78-logs\") pod \"cinder-api-0\" (UID: \"f57ee425-0d4d-41f7-bf99-4ab4e87ead78\") " pod="openstack/cinder-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.924926 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/f57ee425-0d4d-41f7-bf99-4ab4e87ead78-logs\") pod \"cinder-api-0\" (UID: \"f57ee425-0d4d-41f7-bf99-4ab4e87ead78\") " pod="openstack/cinder-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.924979 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d78a94b-d39f-4654-936e-8a39369b2082-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5d78a94b-d39f-4654-936e-8a39369b2082\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.924998 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f57ee425-0d4d-41f7-bf99-4ab4e87ead78-public-tls-certs\") pod \"cinder-api-0\" (UID: \"f57ee425-0d4d-41f7-bf99-4ab4e87ead78\") " pod="openstack/cinder-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.925023 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5d78a94b-d39f-4654-936e-8a39369b2082-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5d78a94b-d39f-4654-936e-8a39369b2082\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.925053 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f57ee425-0d4d-41f7-bf99-4ab4e87ead78-config-data\") pod \"cinder-api-0\" (UID: \"f57ee425-0d4d-41f7-bf99-4ab4e87ead78\") " pod="openstack/cinder-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.925085 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: 
\"468d7d17-9181-4f39-851d-3acff337e10c\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.926888 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5d78a94b-d39f-4654-936e-8a39369b2082-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5d78a94b-d39f-4654-936e-8a39369b2082\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.929160 4760 scope.go:117] "RemoveContainer" containerID="ddb3c1768bc7195398095eaeb0ab7d4403d50e3ebdea9c8b9ec55dcb7836fac2" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.929505 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.929649 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f57ee425-0d4d-41f7-bf99-4ab4e87ead78-scripts\") pod \"cinder-api-0\" (UID: \"f57ee425-0d4d-41f7-bf99-4ab4e87ead78\") " pod="openstack/cinder-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.937261 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f57ee425-0d4d-41f7-bf99-4ab4e87ead78-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f57ee425-0d4d-41f7-bf99-4ab4e87ead78\") " pod="openstack/cinder-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.937885 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.937896 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.938235 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/5d78a94b-d39f-4654-936e-8a39369b2082-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5d78a94b-d39f-4654-936e-8a39369b2082\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.939806 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.944057 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w76dp\" (UniqueName: \"kubernetes.io/projected/f57ee425-0d4d-41f7-bf99-4ab4e87ead78-kube-api-access-w76dp\") pod \"cinder-api-0\" (UID: \"f57ee425-0d4d-41f7-bf99-4ab4e87ead78\") " pod="openstack/cinder-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.948143 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d78a94b-d39f-4654-936e-8a39369b2082-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5d78a94b-d39f-4654-936e-8a39369b2082\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.948194 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5nkw\" (UniqueName: \"kubernetes.io/projected/5d78a94b-d39f-4654-936e-8a39369b2082-kube-api-access-c5nkw\") pod \"glance-default-internal-api-0\" (UID: \"5d78a94b-d39f-4654-936e-8a39369b2082\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.948581 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f57ee425-0d4d-41f7-bf99-4ab4e87ead78-config-data\") pod \"cinder-api-0\" (UID: \"f57ee425-0d4d-41f7-bf99-4ab4e87ead78\") " pod="openstack/cinder-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.948974 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/f57ee425-0d4d-41f7-bf99-4ab4e87ead78-config-data-custom\") pod \"cinder-api-0\" (UID: \"f57ee425-0d4d-41f7-bf99-4ab4e87ead78\") " pod="openstack/cinder-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.962675 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d78a94b-d39f-4654-936e-8a39369b2082-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5d78a94b-d39f-4654-936e-8a39369b2082\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.964520 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f57ee425-0d4d-41f7-bf99-4ab4e87ead78-public-tls-certs\") pod \"cinder-api-0\" (UID: \"f57ee425-0d4d-41f7-bf99-4ab4e87ead78\") " pod="openstack/cinder-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.976455 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"5d78a94b-d39f-4654-936e-8a39369b2082\") " pod="openstack/glance-default-internal-api-0" Jan 21 16:07:51 crc kubenswrapper[4760]: I0121 16:07:51.978718 4760 scope.go:117] "RemoveContainer" containerID="42db8ff90470066bc138cdc1856890c087031809113b3c8bd808bb47f070b4c1" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.005076 4760 scope.go:117] "RemoveContainer" containerID="050d2576ad25bdb1d46b6e2ab6c5a3cab9f4800dbe0a57b22066038844f5e73f" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.028489 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-run-httpd\") pod \"ceilometer-0\" (UID: \"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\") " 
pod="openstack/ceilometer-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.028558 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/468d7d17-9181-4f39-851d-3acff337e10c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"468d7d17-9181-4f39-851d-3acff337e10c\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.028618 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/468d7d17-9181-4f39-851d-3acff337e10c-config-data\") pod \"glance-default-external-api-0\" (UID: \"468d7d17-9181-4f39-851d-3acff337e10c\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.028656 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/468d7d17-9181-4f39-851d-3acff337e10c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"468d7d17-9181-4f39-851d-3acff337e10c\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.028694 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-config-data\") pod \"ceilometer-0\" (UID: \"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\") " pod="openstack/ceilometer-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.028728 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wq9fj\" (UniqueName: \"kubernetes.io/projected/468d7d17-9181-4f39-851d-3acff337e10c-kube-api-access-wq9fj\") pod \"glance-default-external-api-0\" (UID: \"468d7d17-9181-4f39-851d-3acff337e10c\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:52 crc kubenswrapper[4760]: 
I0121 16:07:52.028758 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-scripts\") pod \"ceilometer-0\" (UID: \"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\") " pod="openstack/ceilometer-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.028788 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/468d7d17-9181-4f39-851d-3acff337e10c-scripts\") pod \"glance-default-external-api-0\" (UID: \"468d7d17-9181-4f39-851d-3acff337e10c\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.028808 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwhq9\" (UniqueName: \"kubernetes.io/projected/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-kube-api-access-zwhq9\") pod \"ceilometer-0\" (UID: \"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\") " pod="openstack/ceilometer-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.028828 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-log-httpd\") pod \"ceilometer-0\" (UID: \"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\") " pod="openstack/ceilometer-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.028869 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\") " pod="openstack/ceilometer-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.028903 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"468d7d17-9181-4f39-851d-3acff337e10c\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.028932 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\") " pod="openstack/ceilometer-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.028958 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/468d7d17-9181-4f39-851d-3acff337e10c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"468d7d17-9181-4f39-851d-3acff337e10c\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.028985 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/468d7d17-9181-4f39-851d-3acff337e10c-logs\") pod \"glance-default-external-api-0\" (UID: \"468d7d17-9181-4f39-851d-3acff337e10c\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.029509 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/468d7d17-9181-4f39-851d-3acff337e10c-logs\") pod \"glance-default-external-api-0\" (UID: \"468d7d17-9181-4f39-851d-3acff337e10c\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.029746 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/468d7d17-9181-4f39-851d-3acff337e10c-httpd-run\") pod \"glance-default-external-api-0\" (UID: 
\"468d7d17-9181-4f39-851d-3acff337e10c\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.031149 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"468d7d17-9181-4f39-851d-3acff337e10c\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.035168 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/468d7d17-9181-4f39-851d-3acff337e10c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"468d7d17-9181-4f39-851d-3acff337e10c\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.038054 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.049747 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/468d7d17-9181-4f39-851d-3acff337e10c-config-data\") pod \"glance-default-external-api-0\" (UID: \"468d7d17-9181-4f39-851d-3acff337e10c\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.051532 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/468d7d17-9181-4f39-851d-3acff337e10c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"468d7d17-9181-4f39-851d-3acff337e10c\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.052364 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/468d7d17-9181-4f39-851d-3acff337e10c-scripts\") pod \"glance-default-external-api-0\" (UID: \"468d7d17-9181-4f39-851d-3acff337e10c\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.056369 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wq9fj\" (UniqueName: \"kubernetes.io/projected/468d7d17-9181-4f39-851d-3acff337e10c-kube-api-access-wq9fj\") pod \"glance-default-external-api-0\" (UID: \"468d7d17-9181-4f39-851d-3acff337e10c\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.056728 4760 scope.go:117] "RemoveContainer" containerID="fc68b09c45055ed6d0b269a44950b81485cd71d02e0eb716a5258b78cd1762cd" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.090279 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"468d7d17-9181-4f39-851d-3acff337e10c\") " pod="openstack/glance-default-external-api-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.134220 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-config-data\") pod \"ceilometer-0\" (UID: \"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\") " pod="openstack/ceilometer-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.134323 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-scripts\") pod \"ceilometer-0\" (UID: \"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\") " pod="openstack/ceilometer-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.134374 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwhq9\" 
(UniqueName: \"kubernetes.io/projected/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-kube-api-access-zwhq9\") pod \"ceilometer-0\" (UID: \"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\") " pod="openstack/ceilometer-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.134396 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-log-httpd\") pod \"ceilometer-0\" (UID: \"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\") " pod="openstack/ceilometer-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.134429 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\") " pod="openstack/ceilometer-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.134457 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\") " pod="openstack/ceilometer-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.134544 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-run-httpd\") pod \"ceilometer-0\" (UID: \"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\") " pod="openstack/ceilometer-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.135158 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-run-httpd\") pod \"ceilometer-0\" (UID: \"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\") " pod="openstack/ceilometer-0" Jan 21 16:07:52 crc kubenswrapper[4760]: 
I0121 16:07:52.136447 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-log-httpd\") pod \"ceilometer-0\" (UID: \"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\") " pod="openstack/ceilometer-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.139936 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-scripts\") pod \"ceilometer-0\" (UID: \"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\") " pod="openstack/ceilometer-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.140392 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\") " pod="openstack/ceilometer-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.141149 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-config-data\") pod \"ceilometer-0\" (UID: \"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\") " pod="openstack/ceilometer-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.141445 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\") " pod="openstack/ceilometer-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.164015 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwhq9\" (UniqueName: \"kubernetes.io/projected/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-kube-api-access-zwhq9\") pod \"ceilometer-0\" (UID: 
\"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\") " pod="openstack/ceilometer-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.251412 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.265415 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.378637 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.744197 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 21 16:07:52 crc kubenswrapper[4760]: I0121 16:07:52.953665 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-789c75ff48-s7f9p" podUID="9ce8d17c-d046-45b5-9136-6faca838de63" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.035165 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.094350 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5c9896dc76-gwrzv" podUID="0e7e96ce-a64f-4a21-97e1-b2ebabc7e236" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.227972 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.240331 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/ceilometer-0"] Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.299153 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-5lmzc"] Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.301572 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-5lmzc" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.339801 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-5lmzc"] Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.410750 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-wgjbm"] Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.414828 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-wgjbm" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.424986 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-wgjbm"] Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.505833 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r2ll\" (UniqueName: \"kubernetes.io/projected/7d5e4041-ff0a-416e-b541-480b17fcc32e-kube-api-access-9r2ll\") pod \"nova-cell0-db-create-wgjbm\" (UID: \"7d5e4041-ff0a-416e-b541-480b17fcc32e\") " pod="openstack/nova-cell0-db-create-wgjbm" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.505885 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83ff3135-0e1c-46b4-a3a2-5520a7d505da-operator-scripts\") pod \"nova-api-db-create-5lmzc\" (UID: \"83ff3135-0e1c-46b4-a3a2-5520a7d505da\") " pod="openstack/nova-api-db-create-5lmzc" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.505957 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d5e4041-ff0a-416e-b541-480b17fcc32e-operator-scripts\") pod \"nova-cell0-db-create-wgjbm\" (UID: \"7d5e4041-ff0a-416e-b541-480b17fcc32e\") " pod="openstack/nova-cell0-db-create-wgjbm" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.505991 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swnbb\" (UniqueName: \"kubernetes.io/projected/83ff3135-0e1c-46b4-a3a2-5520a7d505da-kube-api-access-swnbb\") pod \"nova-api-db-create-5lmzc\" (UID: \"83ff3135-0e1c-46b4-a3a2-5520a7d505da\") " pod="openstack/nova-api-db-create-5lmzc" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.510745 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-2ffe-account-create-update-sq7hp"] Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.512230 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2ffe-account-create-update-sq7hp" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.514818 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.533920 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-2ffe-account-create-update-sq7hp"] Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.595088 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f57ee425-0d4d-41f7-bf99-4ab4e87ead78","Type":"ContainerStarted","Data":"589a4a0ec6b81eaae85c5c0d0a0d60e0b90da8faabc8a454e3541a44d65a9c4d"} Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.601626 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc","Type":"ContainerStarted","Data":"225acde00eba57325c36a89c4f4f0390a34af1eb12a6c60f17cf37065932b7aa"} Jan 21 
16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.608343 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swnbb\" (UniqueName: \"kubernetes.io/projected/83ff3135-0e1c-46b4-a3a2-5520a7d505da-kube-api-access-swnbb\") pod \"nova-api-db-create-5lmzc\" (UID: \"83ff3135-0e1c-46b4-a3a2-5520a7d505da\") " pod="openstack/nova-api-db-create-5lmzc" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.608473 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9r2ll\" (UniqueName: \"kubernetes.io/projected/7d5e4041-ff0a-416e-b541-480b17fcc32e-kube-api-access-9r2ll\") pod \"nova-cell0-db-create-wgjbm\" (UID: \"7d5e4041-ff0a-416e-b541-480b17fcc32e\") " pod="openstack/nova-cell0-db-create-wgjbm" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.608518 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83ff3135-0e1c-46b4-a3a2-5520a7d505da-operator-scripts\") pod \"nova-api-db-create-5lmzc\" (UID: \"83ff3135-0e1c-46b4-a3a2-5520a7d505da\") " pod="openstack/nova-api-db-create-5lmzc" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.608646 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d5e4041-ff0a-416e-b541-480b17fcc32e-operator-scripts\") pod \"nova-cell0-db-create-wgjbm\" (UID: \"7d5e4041-ff0a-416e-b541-480b17fcc32e\") " pod="openstack/nova-cell0-db-create-wgjbm" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.609553 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d5e4041-ff0a-416e-b541-480b17fcc32e-operator-scripts\") pod \"nova-cell0-db-create-wgjbm\" (UID: \"7d5e4041-ff0a-416e-b541-480b17fcc32e\") " pod="openstack/nova-cell0-db-create-wgjbm" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 
16:07:53.619827 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83ff3135-0e1c-46b4-a3a2-5520a7d505da-operator-scripts\") pod \"nova-api-db-create-5lmzc\" (UID: \"83ff3135-0e1c-46b4-a3a2-5520a7d505da\") " pod="openstack/nova-api-db-create-5lmzc" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.644462 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9r2ll\" (UniqueName: \"kubernetes.io/projected/7d5e4041-ff0a-416e-b541-480b17fcc32e-kube-api-access-9r2ll\") pod \"nova-cell0-db-create-wgjbm\" (UID: \"7d5e4041-ff0a-416e-b541-480b17fcc32e\") " pod="openstack/nova-cell0-db-create-wgjbm" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.658945 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swnbb\" (UniqueName: \"kubernetes.io/projected/83ff3135-0e1c-46b4-a3a2-5520a7d505da-kube-api-access-swnbb\") pod \"nova-api-db-create-5lmzc\" (UID: \"83ff3135-0e1c-46b4-a3a2-5520a7d505da\") " pod="openstack/nova-api-db-create-5lmzc" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.686709 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53329917-a467-4919-b5ad-170f6fa50655" path="/var/lib/kubelet/pods/53329917-a467-4919-b5ad-170f6fa50655/volumes" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.688154 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cae71bb0-4c04-47db-a201-a172da79df7f" path="/var/lib/kubelet/pods/cae71bb0-4c04-47db-a201-a172da79df7f/volumes" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.691504 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e76b744a-9845-4295-80c1-eb276462b45f" path="/var/lib/kubelet/pods/e76b744a-9845-4295-80c1-eb276462b45f/volumes" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.692497 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-internal-api-0" event={"ID":"5d78a94b-d39f-4654-936e-8a39369b2082","Type":"ContainerStarted","Data":"c42ecd08120d82da09c367436ac0d838e549f14ef4b1ee72069211c882f9586b"} Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.692535 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-s6bgh"] Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.695097 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-s6bgh"] Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.695216 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-s6bgh" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.700930 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-5lmzc" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.704798 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"468d7d17-9181-4f39-851d-3acff337e10c","Type":"ContainerStarted","Data":"5ef2e0c729b2fe50c06a38752eb36ae2aa4feab6aa7b83f83ce82a370c9095c7"} Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.726767 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.728229 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="fd82db1d-e956-477b-99af-024e7e0a6170" containerName="kube-state-metrics" containerID="cri-o://343c16d56ef76b6c8f94de47a03323d6e9983c8996c93947515e45994c60af2c" gracePeriod=30 Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.732650 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95551b69-b405-4008-b600-7010cea057a2-operator-scripts\") pod 
\"nova-api-2ffe-account-create-update-sq7hp\" (UID: \"95551b69-b405-4008-b600-7010cea057a2\") " pod="openstack/nova-api-2ffe-account-create-update-sq7hp" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.732762 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9pvc\" (UniqueName: \"kubernetes.io/projected/95551b69-b405-4008-b600-7010cea057a2-kube-api-access-t9pvc\") pod \"nova-api-2ffe-account-create-update-sq7hp\" (UID: \"95551b69-b405-4008-b600-7010cea057a2\") " pod="openstack/nova-api-2ffe-account-create-update-sq7hp" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.758621 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-wgjbm" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.805233 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-1b63-account-create-update-s5scn"] Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.812182 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-1b63-account-create-update-s5scn" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.822302 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.827085 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-1b63-account-create-update-s5scn"] Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.840896 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11004437-56c2-4e20-911b-e31d6726fabc-operator-scripts\") pod \"nova-cell1-db-create-s6bgh\" (UID: \"11004437-56c2-4e20-911b-e31d6726fabc\") " pod="openstack/nova-cell1-db-create-s6bgh" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.841237 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95551b69-b405-4008-b600-7010cea057a2-operator-scripts\") pod \"nova-api-2ffe-account-create-update-sq7hp\" (UID: \"95551b69-b405-4008-b600-7010cea057a2\") " pod="openstack/nova-api-2ffe-account-create-update-sq7hp" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.841434 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/684f7edc-9176-4aeb-8b75-8f083ba14d04-operator-scripts\") pod \"nova-cell0-1b63-account-create-update-s5scn\" (UID: \"684f7edc-9176-4aeb-8b75-8f083ba14d04\") " pod="openstack/nova-cell0-1b63-account-create-update-s5scn" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.841596 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9pvc\" (UniqueName: \"kubernetes.io/projected/95551b69-b405-4008-b600-7010cea057a2-kube-api-access-t9pvc\") pod 
\"nova-api-2ffe-account-create-update-sq7hp\" (UID: \"95551b69-b405-4008-b600-7010cea057a2\") " pod="openstack/nova-api-2ffe-account-create-update-sq7hp" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.841823 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jgb8\" (UniqueName: \"kubernetes.io/projected/11004437-56c2-4e20-911b-e31d6726fabc-kube-api-access-9jgb8\") pod \"nova-cell1-db-create-s6bgh\" (UID: \"11004437-56c2-4e20-911b-e31d6726fabc\") " pod="openstack/nova-cell1-db-create-s6bgh" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.842022 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc5xg\" (UniqueName: \"kubernetes.io/projected/684f7edc-9176-4aeb-8b75-8f083ba14d04-kube-api-access-fc5xg\") pod \"nova-cell0-1b63-account-create-update-s5scn\" (UID: \"684f7edc-9176-4aeb-8b75-8f083ba14d04\") " pod="openstack/nova-cell0-1b63-account-create-update-s5scn" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.843161 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95551b69-b405-4008-b600-7010cea057a2-operator-scripts\") pod \"nova-api-2ffe-account-create-update-sq7hp\" (UID: \"95551b69-b405-4008-b600-7010cea057a2\") " pod="openstack/nova-api-2ffe-account-create-update-sq7hp" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.865394 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9pvc\" (UniqueName: \"kubernetes.io/projected/95551b69-b405-4008-b600-7010cea057a2-kube-api-access-t9pvc\") pod \"nova-api-2ffe-account-create-update-sq7hp\" (UID: \"95551b69-b405-4008-b600-7010cea057a2\") " pod="openstack/nova-api-2ffe-account-create-update-sq7hp" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.867531 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-2ffe-account-create-update-sq7hp" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.925081 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="fd82db1d-e956-477b-99af-024e7e0a6170" containerName="kube-state-metrics" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": dial tcp 10.217.0.104:8081: connect: connection refused" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.926208 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-49b7-account-create-update-cbnck"] Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.927814 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-49b7-account-create-update-cbnck" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.933663 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.937830 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-49b7-account-create-update-cbnck"] Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.946257 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jgb8\" (UniqueName: \"kubernetes.io/projected/11004437-56c2-4e20-911b-e31d6726fabc-kube-api-access-9jgb8\") pod \"nova-cell1-db-create-s6bgh\" (UID: \"11004437-56c2-4e20-911b-e31d6726fabc\") " pod="openstack/nova-cell1-db-create-s6bgh" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.946471 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2bb047a-4a1f-4617-8d7a-66f80c84ea4a-operator-scripts\") pod \"nova-cell1-49b7-account-create-update-cbnck\" (UID: \"b2bb047a-4a1f-4617-8d7a-66f80c84ea4a\") " 
pod="openstack/nova-cell1-49b7-account-create-update-cbnck" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.946536 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqq4k\" (UniqueName: \"kubernetes.io/projected/b2bb047a-4a1f-4617-8d7a-66f80c84ea4a-kube-api-access-bqq4k\") pod \"nova-cell1-49b7-account-create-update-cbnck\" (UID: \"b2bb047a-4a1f-4617-8d7a-66f80c84ea4a\") " pod="openstack/nova-cell1-49b7-account-create-update-cbnck" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.946595 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fc5xg\" (UniqueName: \"kubernetes.io/projected/684f7edc-9176-4aeb-8b75-8f083ba14d04-kube-api-access-fc5xg\") pod \"nova-cell0-1b63-account-create-update-s5scn\" (UID: \"684f7edc-9176-4aeb-8b75-8f083ba14d04\") " pod="openstack/nova-cell0-1b63-account-create-update-s5scn" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.947276 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11004437-56c2-4e20-911b-e31d6726fabc-operator-scripts\") pod \"nova-cell1-db-create-s6bgh\" (UID: \"11004437-56c2-4e20-911b-e31d6726fabc\") " pod="openstack/nova-cell1-db-create-s6bgh" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.947420 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/684f7edc-9176-4aeb-8b75-8f083ba14d04-operator-scripts\") pod \"nova-cell0-1b63-account-create-update-s5scn\" (UID: \"684f7edc-9176-4aeb-8b75-8f083ba14d04\") " pod="openstack/nova-cell0-1b63-account-create-update-s5scn" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.948449 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/684f7edc-9176-4aeb-8b75-8f083ba14d04-operator-scripts\") pod \"nova-cell0-1b63-account-create-update-s5scn\" (UID: \"684f7edc-9176-4aeb-8b75-8f083ba14d04\") " pod="openstack/nova-cell0-1b63-account-create-update-s5scn" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.951481 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11004437-56c2-4e20-911b-e31d6726fabc-operator-scripts\") pod \"nova-cell1-db-create-s6bgh\" (UID: \"11004437-56c2-4e20-911b-e31d6726fabc\") " pod="openstack/nova-cell1-db-create-s6bgh" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.972950 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fc5xg\" (UniqueName: \"kubernetes.io/projected/684f7edc-9176-4aeb-8b75-8f083ba14d04-kube-api-access-fc5xg\") pod \"nova-cell0-1b63-account-create-update-s5scn\" (UID: \"684f7edc-9176-4aeb-8b75-8f083ba14d04\") " pod="openstack/nova-cell0-1b63-account-create-update-s5scn" Jan 21 16:07:53 crc kubenswrapper[4760]: I0121 16:07:53.980634 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jgb8\" (UniqueName: \"kubernetes.io/projected/11004437-56c2-4e20-911b-e31d6726fabc-kube-api-access-9jgb8\") pod \"nova-cell1-db-create-s6bgh\" (UID: \"11004437-56c2-4e20-911b-e31d6726fabc\") " pod="openstack/nova-cell1-db-create-s6bgh" Jan 21 16:07:54 crc kubenswrapper[4760]: I0121 16:07:54.048434 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2bb047a-4a1f-4617-8d7a-66f80c84ea4a-operator-scripts\") pod \"nova-cell1-49b7-account-create-update-cbnck\" (UID: \"b2bb047a-4a1f-4617-8d7a-66f80c84ea4a\") " pod="openstack/nova-cell1-49b7-account-create-update-cbnck" Jan 21 16:07:54 crc kubenswrapper[4760]: I0121 16:07:54.048865 4760 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-bqq4k\" (UniqueName: \"kubernetes.io/projected/b2bb047a-4a1f-4617-8d7a-66f80c84ea4a-kube-api-access-bqq4k\") pod \"nova-cell1-49b7-account-create-update-cbnck\" (UID: \"b2bb047a-4a1f-4617-8d7a-66f80c84ea4a\") " pod="openstack/nova-cell1-49b7-account-create-update-cbnck" Jan 21 16:07:54 crc kubenswrapper[4760]: I0121 16:07:54.049858 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2bb047a-4a1f-4617-8d7a-66f80c84ea4a-operator-scripts\") pod \"nova-cell1-49b7-account-create-update-cbnck\" (UID: \"b2bb047a-4a1f-4617-8d7a-66f80c84ea4a\") " pod="openstack/nova-cell1-49b7-account-create-update-cbnck" Jan 21 16:07:54 crc kubenswrapper[4760]: I0121 16:07:54.071201 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqq4k\" (UniqueName: \"kubernetes.io/projected/b2bb047a-4a1f-4617-8d7a-66f80c84ea4a-kube-api-access-bqq4k\") pod \"nova-cell1-49b7-account-create-update-cbnck\" (UID: \"b2bb047a-4a1f-4617-8d7a-66f80c84ea4a\") " pod="openstack/nova-cell1-49b7-account-create-update-cbnck" Jan 21 16:07:54 crc kubenswrapper[4760]: I0121 16:07:54.091308 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-s6bgh" Jan 21 16:07:54 crc kubenswrapper[4760]: I0121 16:07:54.197010 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1b63-account-create-update-s5scn" Jan 21 16:07:54 crc kubenswrapper[4760]: I0121 16:07:54.262339 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-49b7-account-create-update-cbnck" Jan 21 16:07:54 crc kubenswrapper[4760]: I0121 16:07:54.585182 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-wgjbm"] Jan 21 16:07:54 crc kubenswrapper[4760]: I0121 16:07:54.663211 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-5lmzc"] Jan 21 16:07:54 crc kubenswrapper[4760]: I0121 16:07:54.684893 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 16:07:54 crc kubenswrapper[4760]: I0121 16:07:54.780201 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-5lmzc" event={"ID":"83ff3135-0e1c-46b4-a3a2-5520a7d505da","Type":"ContainerStarted","Data":"af89809b3911aa88a0970a4851d32f92d6e0ac9cd6710a0ddafdd2a2edc4fdd2"} Jan 21 16:07:54 crc kubenswrapper[4760]: I0121 16:07:54.785623 4760 generic.go:334] "Generic (PLEG): container finished" podID="fd82db1d-e956-477b-99af-024e7e0a6170" containerID="343c16d56ef76b6c8f94de47a03323d6e9983c8996c93947515e45994c60af2c" exitCode=2 Jan 21 16:07:54 crc kubenswrapper[4760]: I0121 16:07:54.785727 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"fd82db1d-e956-477b-99af-024e7e0a6170","Type":"ContainerDied","Data":"343c16d56ef76b6c8f94de47a03323d6e9983c8996c93947515e45994c60af2c"} Jan 21 16:07:54 crc kubenswrapper[4760]: I0121 16:07:54.785760 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"fd82db1d-e956-477b-99af-024e7e0a6170","Type":"ContainerDied","Data":"c21e61b8d5ddc6d0cf0e89f35035f43ac3b2d33ed78630f8fcb288cfbfd9d358"} Jan 21 16:07:54 crc kubenswrapper[4760]: I0121 16:07:54.785785 4760 scope.go:117] "RemoveContainer" containerID="343c16d56ef76b6c8f94de47a03323d6e9983c8996c93947515e45994c60af2c" Jan 21 16:07:54 crc kubenswrapper[4760]: I0121 
16:07:54.785960 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 16:07:54 crc kubenswrapper[4760]: I0121 16:07:54.793109 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-wgjbm" event={"ID":"7d5e4041-ff0a-416e-b541-480b17fcc32e","Type":"ContainerStarted","Data":"7909dc2338a0955b0a26b07bbcf2f19a1868860cc1f24db750be23fdf6f00a51"} Jan 21 16:07:54 crc kubenswrapper[4760]: I0121 16:07:54.821651 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f57ee425-0d4d-41f7-bf99-4ab4e87ead78","Type":"ContainerStarted","Data":"93365c8ee1690c0e8b1d35df935ac128bf577f9c08c5ecab1d178c311180d7f0"} Jan 21 16:07:54 crc kubenswrapper[4760]: I0121 16:07:54.852499 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-1b63-account-create-update-s5scn"] Jan 21 16:07:54 crc kubenswrapper[4760]: I0121 16:07:54.862162 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-2ffe-account-create-update-sq7hp"] Jan 21 16:07:54 crc kubenswrapper[4760]: I0121 16:07:54.870177 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmdrz\" (UniqueName: \"kubernetes.io/projected/fd82db1d-e956-477b-99af-024e7e0a6170-kube-api-access-gmdrz\") pod \"fd82db1d-e956-477b-99af-024e7e0a6170\" (UID: \"fd82db1d-e956-477b-99af-024e7e0a6170\") " Jan 21 16:07:54 crc kubenswrapper[4760]: I0121 16:07:54.872547 4760 scope.go:117] "RemoveContainer" containerID="343c16d56ef76b6c8f94de47a03323d6e9983c8996c93947515e45994c60af2c" Jan 21 16:07:54 crc kubenswrapper[4760]: E0121 16:07:54.873106 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"343c16d56ef76b6c8f94de47a03323d6e9983c8996c93947515e45994c60af2c\": container with ID starting with 
343c16d56ef76b6c8f94de47a03323d6e9983c8996c93947515e45994c60af2c not found: ID does not exist" containerID="343c16d56ef76b6c8f94de47a03323d6e9983c8996c93947515e45994c60af2c" Jan 21 16:07:54 crc kubenswrapper[4760]: I0121 16:07:54.873148 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"343c16d56ef76b6c8f94de47a03323d6e9983c8996c93947515e45994c60af2c"} err="failed to get container status \"343c16d56ef76b6c8f94de47a03323d6e9983c8996c93947515e45994c60af2c\": rpc error: code = NotFound desc = could not find container \"343c16d56ef76b6c8f94de47a03323d6e9983c8996c93947515e45994c60af2c\": container with ID starting with 343c16d56ef76b6c8f94de47a03323d6e9983c8996c93947515e45994c60af2c not found: ID does not exist" Jan 21 16:07:54 crc kubenswrapper[4760]: I0121 16:07:54.895406 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd82db1d-e956-477b-99af-024e7e0a6170-kube-api-access-gmdrz" (OuterVolumeSpecName: "kube-api-access-gmdrz") pod "fd82db1d-e956-477b-99af-024e7e0a6170" (UID: "fd82db1d-e956-477b-99af-024e7e0a6170"). InnerVolumeSpecName "kube-api-access-gmdrz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:07:54 crc kubenswrapper[4760]: W0121 16:07:54.927035 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95551b69_b405_4008_b600_7010cea057a2.slice/crio-c54e39d19eef14a65a751c5ea6a630e1b0a0b38c303bed0fbea2e73c7e08ec69 WatchSource:0}: Error finding container c54e39d19eef14a65a751c5ea6a630e1b0a0b38c303bed0fbea2e73c7e08ec69: Status 404 returned error can't find the container with id c54e39d19eef14a65a751c5ea6a630e1b0a0b38c303bed0fbea2e73c7e08ec69 Jan 21 16:07:54 crc kubenswrapper[4760]: I0121 16:07:54.974589 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmdrz\" (UniqueName: \"kubernetes.io/projected/fd82db1d-e956-477b-99af-024e7e0a6170-kube-api-access-gmdrz\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:55 crc kubenswrapper[4760]: I0121 16:07:55.156051 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-s6bgh"] Jan 21 16:07:55 crc kubenswrapper[4760]: I0121 16:07:55.173122 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 16:07:55 crc kubenswrapper[4760]: I0121 16:07:55.195511 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 16:07:55 crc kubenswrapper[4760]: I0121 16:07:55.206581 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 16:07:55 crc kubenswrapper[4760]: E0121 16:07:55.206985 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd82db1d-e956-477b-99af-024e7e0a6170" containerName="kube-state-metrics" Jan 21 16:07:55 crc kubenswrapper[4760]: I0121 16:07:55.207001 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd82db1d-e956-477b-99af-024e7e0a6170" containerName="kube-state-metrics" Jan 21 16:07:55 crc kubenswrapper[4760]: I0121 16:07:55.207207 4760 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="fd82db1d-e956-477b-99af-024e7e0a6170" containerName="kube-state-metrics" Jan 21 16:07:55 crc kubenswrapper[4760]: I0121 16:07:55.208030 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 16:07:55 crc kubenswrapper[4760]: I0121 16:07:55.211307 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 21 16:07:55 crc kubenswrapper[4760]: I0121 16:07:55.211645 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 21 16:07:55 crc kubenswrapper[4760]: W0121 16:07:55.216704 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod11004437_56c2_4e20_911b_e31d6726fabc.slice/crio-2551d1b0aef05c4bec6521949678ede7e793e771824811a0dce89fc3c28a9d79 WatchSource:0}: Error finding container 2551d1b0aef05c4bec6521949678ede7e793e771824811a0dce89fc3c28a9d79: Status 404 returned error can't find the container with id 2551d1b0aef05c4bec6521949678ede7e793e771824811a0dce89fc3c28a9d79 Jan 21 16:07:55 crc kubenswrapper[4760]: I0121 16:07:55.226531 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-49b7-account-create-update-cbnck"] Jan 21 16:07:55 crc kubenswrapper[4760]: I0121 16:07:55.238026 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 16:07:55 crc kubenswrapper[4760]: W0121 16:07:55.274352 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2bb047a_4a1f_4617_8d7a_66f80c84ea4a.slice/crio-3ef2ebcbf962eba21493c8aa9a77895b978afd14c337cdd942578ee7cdae5b69 WatchSource:0}: Error finding container 3ef2ebcbf962eba21493c8aa9a77895b978afd14c337cdd942578ee7cdae5b69: Status 404 returned error can't find the container with id 
3ef2ebcbf962eba21493c8aa9a77895b978afd14c337cdd942578ee7cdae5b69 Jan 21 16:07:55 crc kubenswrapper[4760]: I0121 16:07:55.387218 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/f0d87473-0ca7-46b5-a57f-611e3014ab77-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"f0d87473-0ca7-46b5-a57f-611e3014ab77\") " pod="openstack/kube-state-metrics-0" Jan 21 16:07:55 crc kubenswrapper[4760]: I0121 16:07:55.387279 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0d87473-0ca7-46b5-a57f-611e3014ab77-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"f0d87473-0ca7-46b5-a57f-611e3014ab77\") " pod="openstack/kube-state-metrics-0" Jan 21 16:07:55 crc kubenswrapper[4760]: I0121 16:07:55.387442 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttqb4\" (UniqueName: \"kubernetes.io/projected/f0d87473-0ca7-46b5-a57f-611e3014ab77-kube-api-access-ttqb4\") pod \"kube-state-metrics-0\" (UID: \"f0d87473-0ca7-46b5-a57f-611e3014ab77\") " pod="openstack/kube-state-metrics-0" Jan 21 16:07:55 crc kubenswrapper[4760]: I0121 16:07:55.387548 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0d87473-0ca7-46b5-a57f-611e3014ab77-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"f0d87473-0ca7-46b5-a57f-611e3014ab77\") " pod="openstack/kube-state-metrics-0" Jan 21 16:07:55 crc kubenswrapper[4760]: I0121 16:07:55.491557 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/f0d87473-0ca7-46b5-a57f-611e3014ab77-kube-state-metrics-tls-config\") pod 
\"kube-state-metrics-0\" (UID: \"f0d87473-0ca7-46b5-a57f-611e3014ab77\") " pod="openstack/kube-state-metrics-0" Jan 21 16:07:55 crc kubenswrapper[4760]: I0121 16:07:55.491660 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0d87473-0ca7-46b5-a57f-611e3014ab77-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"f0d87473-0ca7-46b5-a57f-611e3014ab77\") " pod="openstack/kube-state-metrics-0" Jan 21 16:07:55 crc kubenswrapper[4760]: I0121 16:07:55.491770 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttqb4\" (UniqueName: \"kubernetes.io/projected/f0d87473-0ca7-46b5-a57f-611e3014ab77-kube-api-access-ttqb4\") pod \"kube-state-metrics-0\" (UID: \"f0d87473-0ca7-46b5-a57f-611e3014ab77\") " pod="openstack/kube-state-metrics-0" Jan 21 16:07:55 crc kubenswrapper[4760]: I0121 16:07:55.492100 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0d87473-0ca7-46b5-a57f-611e3014ab77-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"f0d87473-0ca7-46b5-a57f-611e3014ab77\") " pod="openstack/kube-state-metrics-0" Jan 21 16:07:55 crc kubenswrapper[4760]: I0121 16:07:55.500655 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0d87473-0ca7-46b5-a57f-611e3014ab77-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"f0d87473-0ca7-46b5-a57f-611e3014ab77\") " pod="openstack/kube-state-metrics-0" Jan 21 16:07:55 crc kubenswrapper[4760]: I0121 16:07:55.506466 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/f0d87473-0ca7-46b5-a57f-611e3014ab77-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: 
\"f0d87473-0ca7-46b5-a57f-611e3014ab77\") " pod="openstack/kube-state-metrics-0" Jan 21 16:07:55 crc kubenswrapper[4760]: I0121 16:07:55.511809 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0d87473-0ca7-46b5-a57f-611e3014ab77-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"f0d87473-0ca7-46b5-a57f-611e3014ab77\") " pod="openstack/kube-state-metrics-0" Jan 21 16:07:55 crc kubenswrapper[4760]: I0121 16:07:55.513596 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttqb4\" (UniqueName: \"kubernetes.io/projected/f0d87473-0ca7-46b5-a57f-611e3014ab77-kube-api-access-ttqb4\") pod \"kube-state-metrics-0\" (UID: \"f0d87473-0ca7-46b5-a57f-611e3014ab77\") " pod="openstack/kube-state-metrics-0" Jan 21 16:07:55 crc kubenswrapper[4760]: I0121 16:07:55.641365 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 16:07:55 crc kubenswrapper[4760]: I0121 16:07:55.649394 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd82db1d-e956-477b-99af-024e7e0a6170" path="/var/lib/kubelet/pods/fd82db1d-e956-477b-99af-024e7e0a6170/volumes" Jan 21 16:07:55 crc kubenswrapper[4760]: I0121 16:07:55.890954 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="e76b744a-9845-4295-80c1-eb276462b45f" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.161:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:07:55 crc kubenswrapper[4760]: I0121 16:07:55.929874 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc","Type":"ContainerStarted","Data":"dc188a613184dfef070509def992410ee7ff8057764a779ff9ed43ca888f828c"} Jan 21 16:07:55 crc kubenswrapper[4760]: I0121 16:07:55.942989 4760 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5d78a94b-d39f-4654-936e-8a39369b2082","Type":"ContainerStarted","Data":"b7be8b0d5ab697f787ac322b9453adfff8ebc659b92ca1fde7ac74b136ee0cbe"} Jan 21 16:07:55 crc kubenswrapper[4760]: I0121 16:07:55.976711 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-49b7-account-create-update-cbnck" event={"ID":"b2bb047a-4a1f-4617-8d7a-66f80c84ea4a","Type":"ContainerStarted","Data":"d45fc96b5bde6b50e5a181a4a1b65a2feaa27b328ff69b7618a0540b8bfa00ea"} Jan 21 16:07:55 crc kubenswrapper[4760]: I0121 16:07:55.976768 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-49b7-account-create-update-cbnck" event={"ID":"b2bb047a-4a1f-4617-8d7a-66f80c84ea4a","Type":"ContainerStarted","Data":"3ef2ebcbf962eba21493c8aa9a77895b978afd14c337cdd942578ee7cdae5b69"} Jan 21 16:07:55 crc kubenswrapper[4760]: I0121 16:07:55.993871 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2ffe-account-create-update-sq7hp" event={"ID":"95551b69-b405-4008-b600-7010cea057a2","Type":"ContainerStarted","Data":"bab012f846da9af9ee21c8e00f95dcde0d5a8453a7d5642dacc464322ba9fee4"} Jan 21 16:07:55 crc kubenswrapper[4760]: I0121 16:07:55.993977 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2ffe-account-create-update-sq7hp" event={"ID":"95551b69-b405-4008-b600-7010cea057a2","Type":"ContainerStarted","Data":"c54e39d19eef14a65a751c5ea6a630e1b0a0b38c303bed0fbea2e73c7e08ec69"} Jan 21 16:07:56 crc kubenswrapper[4760]: I0121 16:07:56.004936 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-49b7-account-create-update-cbnck" podStartSLOduration=3.004913597 podStartE2EDuration="3.004913597s" podCreationTimestamp="2026-01-21 16:07:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-21 16:07:56.003754409 +0000 UTC m=+1246.671523987" watchObservedRunningTime="2026-01-21 16:07:56.004913597 +0000 UTC m=+1246.672683175" Jan 21 16:07:56 crc kubenswrapper[4760]: I0121 16:07:56.005199 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"468d7d17-9181-4f39-851d-3acff337e10c","Type":"ContainerStarted","Data":"bfac0afe364bc179f3129e516d028263b0f8f9b28c9de00f95d7a200dd84431b"} Jan 21 16:07:56 crc kubenswrapper[4760]: I0121 16:07:56.046466 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-s6bgh" event={"ID":"11004437-56c2-4e20-911b-e31d6726fabc","Type":"ContainerStarted","Data":"2551d1b0aef05c4bec6521949678ede7e793e771824811a0dce89fc3c28a9d79"} Jan 21 16:07:56 crc kubenswrapper[4760]: I0121 16:07:56.052719 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-2ffe-account-create-update-sq7hp" podStartSLOduration=3.05268467 podStartE2EDuration="3.05268467s" podCreationTimestamp="2026-01-21 16:07:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:07:56.025105603 +0000 UTC m=+1246.692875191" watchObservedRunningTime="2026-01-21 16:07:56.05268467 +0000 UTC m=+1246.720454248" Jan 21 16:07:56 crc kubenswrapper[4760]: I0121 16:07:56.061334 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1b63-account-create-update-s5scn" event={"ID":"684f7edc-9176-4aeb-8b75-8f083ba14d04","Type":"ContainerStarted","Data":"e21e8812181d09da583964e812ef51190f4006ea24869fc79e02b4f805740aad"} Jan 21 16:07:56 crc kubenswrapper[4760]: I0121 16:07:56.061408 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1b63-account-create-update-s5scn" 
event={"ID":"684f7edc-9176-4aeb-8b75-8f083ba14d04","Type":"ContainerStarted","Data":"97d41ba11ae35b3681fbfee8bb2cd6ffa01f48c0fe603807474b9ea5dd261efc"} Jan 21 16:07:56 crc kubenswrapper[4760]: I0121 16:07:56.076456 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-s6bgh" podStartSLOduration=3.076435353 podStartE2EDuration="3.076435353s" podCreationTimestamp="2026-01-21 16:07:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:07:56.069146014 +0000 UTC m=+1246.736915622" watchObservedRunningTime="2026-01-21 16:07:56.076435353 +0000 UTC m=+1246.744204931" Jan 21 16:07:56 crc kubenswrapper[4760]: I0121 16:07:56.077361 4760 generic.go:334] "Generic (PLEG): container finished" podID="83ff3135-0e1c-46b4-a3a2-5520a7d505da" containerID="9efd7cd0b2508fe6b4994db87ba2ac3d840259ba581e4a0b9385f3fb037f31a4" exitCode=0 Jan 21 16:07:56 crc kubenswrapper[4760]: I0121 16:07:56.077468 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-5lmzc" event={"ID":"83ff3135-0e1c-46b4-a3a2-5520a7d505da","Type":"ContainerDied","Data":"9efd7cd0b2508fe6b4994db87ba2ac3d840259ba581e4a0b9385f3fb037f31a4"} Jan 21 16:07:56 crc kubenswrapper[4760]: I0121 16:07:56.089521 4760 generic.go:334] "Generic (PLEG): container finished" podID="7d5e4041-ff0a-416e-b541-480b17fcc32e" containerID="aea9fe4bfa7bd096a20c98a988cc0352b5060d7074bdc9b9eacffe3c811bf1ca" exitCode=0 Jan 21 16:07:56 crc kubenswrapper[4760]: I0121 16:07:56.089584 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-wgjbm" event={"ID":"7d5e4041-ff0a-416e-b541-480b17fcc32e","Type":"ContainerDied","Data":"aea9fe4bfa7bd096a20c98a988cc0352b5060d7074bdc9b9eacffe3c811bf1ca"} Jan 21 16:07:56 crc kubenswrapper[4760]: I0121 16:07:56.129683 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-cell0-1b63-account-create-update-s5scn" podStartSLOduration=3.129653509 podStartE2EDuration="3.129653509s" podCreationTimestamp="2026-01-21 16:07:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:07:56.102598905 +0000 UTC m=+1246.770368493" watchObservedRunningTime="2026-01-21 16:07:56.129653509 +0000 UTC m=+1246.797423087" Jan 21 16:07:56 crc kubenswrapper[4760]: I0121 16:07:56.387881 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 16:07:56 crc kubenswrapper[4760]: W0121 16:07:56.413179 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0d87473_0ca7_46b5_a57f_611e3014ab77.slice/crio-2cc2cd7cc6482e9ae8771db0fb85209673af10e7fdab56a37ea889145cb8bf4d WatchSource:0}: Error finding container 2cc2cd7cc6482e9ae8771db0fb85209673af10e7fdab56a37ea889145cb8bf4d: Status 404 returned error can't find the container with id 2cc2cd7cc6482e9ae8771db0fb85209673af10e7fdab56a37ea889145cb8bf4d Jan 21 16:07:57 crc kubenswrapper[4760]: I0121 16:07:57.102807 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"468d7d17-9181-4f39-851d-3acff337e10c","Type":"ContainerStarted","Data":"1dfdc4ce1f9cddf05e8a4989a840b7df98a13f280aff232b363ea3712eca4fe4"} Jan 21 16:07:57 crc kubenswrapper[4760]: I0121 16:07:57.105730 4760 generic.go:334] "Generic (PLEG): container finished" podID="b2bb047a-4a1f-4617-8d7a-66f80c84ea4a" containerID="d45fc96b5bde6b50e5a181a4a1b65a2feaa27b328ff69b7618a0540b8bfa00ea" exitCode=0 Jan 21 16:07:57 crc kubenswrapper[4760]: I0121 16:07:57.105840 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-49b7-account-create-update-cbnck" 
event={"ID":"b2bb047a-4a1f-4617-8d7a-66f80c84ea4a","Type":"ContainerDied","Data":"d45fc96b5bde6b50e5a181a4a1b65a2feaa27b328ff69b7618a0540b8bfa00ea"} Jan 21 16:07:57 crc kubenswrapper[4760]: I0121 16:07:57.108133 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f57ee425-0d4d-41f7-bf99-4ab4e87ead78","Type":"ContainerStarted","Data":"627abbd5359450398bfd5a735211f093587ce00c4a1c9306079acdaac9feceb2"} Jan 21 16:07:57 crc kubenswrapper[4760]: I0121 16:07:57.112209 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f0d87473-0ca7-46b5-a57f-611e3014ab77","Type":"ContainerStarted","Data":"2cc2cd7cc6482e9ae8771db0fb85209673af10e7fdab56a37ea889145cb8bf4d"} Jan 21 16:07:57 crc kubenswrapper[4760]: I0121 16:07:57.117022 4760 generic.go:334] "Generic (PLEG): container finished" podID="684f7edc-9176-4aeb-8b75-8f083ba14d04" containerID="e21e8812181d09da583964e812ef51190f4006ea24869fc79e02b4f805740aad" exitCode=0 Jan 21 16:07:57 crc kubenswrapper[4760]: I0121 16:07:57.117110 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1b63-account-create-update-s5scn" event={"ID":"684f7edc-9176-4aeb-8b75-8f083ba14d04","Type":"ContainerDied","Data":"e21e8812181d09da583964e812ef51190f4006ea24869fc79e02b4f805740aad"} Jan 21 16:07:57 crc kubenswrapper[4760]: I0121 16:07:57.120025 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc","Type":"ContainerStarted","Data":"22f3e8b98b2e1d9cdc66b041638a164799b75e9b8934c9e3b3dc665b2d174406"} Jan 21 16:07:57 crc kubenswrapper[4760]: I0121 16:07:57.122424 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5d78a94b-d39f-4654-936e-8a39369b2082","Type":"ContainerStarted","Data":"a4b6f7a87daf79608622319768550cc05a4443abb289ac802e87eb345aa4ce80"} Jan 21 16:07:57 crc kubenswrapper[4760]: I0121 
16:07:57.124413 4760 generic.go:334] "Generic (PLEG): container finished" podID="11004437-56c2-4e20-911b-e31d6726fabc" containerID="ced70d1d4870a2dc28e44804244b887878eb363beffb85bfdf18c1407d5b7ab5" exitCode=0 Jan 21 16:07:57 crc kubenswrapper[4760]: I0121 16:07:57.124519 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-s6bgh" event={"ID":"11004437-56c2-4e20-911b-e31d6726fabc","Type":"ContainerDied","Data":"ced70d1d4870a2dc28e44804244b887878eb363beffb85bfdf18c1407d5b7ab5"} Jan 21 16:07:57 crc kubenswrapper[4760]: I0121 16:07:57.127289 4760 generic.go:334] "Generic (PLEG): container finished" podID="95551b69-b405-4008-b600-7010cea057a2" containerID="bab012f846da9af9ee21c8e00f95dcde0d5a8453a7d5642dacc464322ba9fee4" exitCode=0 Jan 21 16:07:57 crc kubenswrapper[4760]: I0121 16:07:57.127391 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2ffe-account-create-update-sq7hp" event={"ID":"95551b69-b405-4008-b600-7010cea057a2","Type":"ContainerDied","Data":"bab012f846da9af9ee21c8e00f95dcde0d5a8453a7d5642dacc464322ba9fee4"} Jan 21 16:07:57 crc kubenswrapper[4760]: I0121 16:07:57.150458 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.150423755 podStartE2EDuration="6.150423755s" podCreationTimestamp="2026-01-21 16:07:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:07:57.134401082 +0000 UTC m=+1247.802170660" watchObservedRunningTime="2026-01-21 16:07:57.150423755 +0000 UTC m=+1247.818193333" Jan 21 16:07:57 crc kubenswrapper[4760]: I0121 16:07:57.167265 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.167237658 podStartE2EDuration="6.167237658s" podCreationTimestamp="2026-01-21 16:07:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:07:57.161268741 +0000 UTC m=+1247.829038319" watchObservedRunningTime="2026-01-21 16:07:57.167237658 +0000 UTC m=+1247.835007236" Jan 21 16:07:57 crc kubenswrapper[4760]: I0121 16:07:57.288134 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.288108445 podStartE2EDuration="6.288108445s" podCreationTimestamp="2026-01-21 16:07:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:07:57.281441091 +0000 UTC m=+1247.949210669" watchObservedRunningTime="2026-01-21 16:07:57.288108445 +0000 UTC m=+1247.955878023" Jan 21 16:07:57 crc kubenswrapper[4760]: I0121 16:07:57.400670 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:07:57 crc kubenswrapper[4760]: I0121 16:07:57.801830 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-wgjbm" Jan 21 16:07:57 crc kubenswrapper[4760]: I0121 16:07:57.813167 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-5lmzc" Jan 21 16:07:57 crc kubenswrapper[4760]: I0121 16:07:57.904213 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d5e4041-ff0a-416e-b541-480b17fcc32e-operator-scripts\") pod \"7d5e4041-ff0a-416e-b541-480b17fcc32e\" (UID: \"7d5e4041-ff0a-416e-b541-480b17fcc32e\") " Jan 21 16:07:57 crc kubenswrapper[4760]: I0121 16:07:57.904498 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9r2ll\" (UniqueName: \"kubernetes.io/projected/7d5e4041-ff0a-416e-b541-480b17fcc32e-kube-api-access-9r2ll\") pod \"7d5e4041-ff0a-416e-b541-480b17fcc32e\" (UID: \"7d5e4041-ff0a-416e-b541-480b17fcc32e\") " Jan 21 16:07:57 crc kubenswrapper[4760]: I0121 16:07:57.905017 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d5e4041-ff0a-416e-b541-480b17fcc32e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7d5e4041-ff0a-416e-b541-480b17fcc32e" (UID: "7d5e4041-ff0a-416e-b541-480b17fcc32e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:57 crc kubenswrapper[4760]: I0121 16:07:57.910525 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d5e4041-ff0a-416e-b541-480b17fcc32e-kube-api-access-9r2ll" (OuterVolumeSpecName: "kube-api-access-9r2ll") pod "7d5e4041-ff0a-416e-b541-480b17fcc32e" (UID: "7d5e4041-ff0a-416e-b541-480b17fcc32e"). InnerVolumeSpecName "kube-api-access-9r2ll". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:07:58 crc kubenswrapper[4760]: I0121 16:07:58.006650 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83ff3135-0e1c-46b4-a3a2-5520a7d505da-operator-scripts\") pod \"83ff3135-0e1c-46b4-a3a2-5520a7d505da\" (UID: \"83ff3135-0e1c-46b4-a3a2-5520a7d505da\") " Jan 21 16:07:58 crc kubenswrapper[4760]: I0121 16:07:58.006718 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swnbb\" (UniqueName: \"kubernetes.io/projected/83ff3135-0e1c-46b4-a3a2-5520a7d505da-kube-api-access-swnbb\") pod \"83ff3135-0e1c-46b4-a3a2-5520a7d505da\" (UID: \"83ff3135-0e1c-46b4-a3a2-5520a7d505da\") " Jan 21 16:07:58 crc kubenswrapper[4760]: I0121 16:07:58.007178 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d5e4041-ff0a-416e-b541-480b17fcc32e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:58 crc kubenswrapper[4760]: I0121 16:07:58.007196 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9r2ll\" (UniqueName: \"kubernetes.io/projected/7d5e4041-ff0a-416e-b541-480b17fcc32e-kube-api-access-9r2ll\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:58 crc kubenswrapper[4760]: I0121 16:07:58.007261 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83ff3135-0e1c-46b4-a3a2-5520a7d505da-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "83ff3135-0e1c-46b4-a3a2-5520a7d505da" (UID: "83ff3135-0e1c-46b4-a3a2-5520a7d505da"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:58 crc kubenswrapper[4760]: I0121 16:07:58.009828 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83ff3135-0e1c-46b4-a3a2-5520a7d505da-kube-api-access-swnbb" (OuterVolumeSpecName: "kube-api-access-swnbb") pod "83ff3135-0e1c-46b4-a3a2-5520a7d505da" (UID: "83ff3135-0e1c-46b4-a3a2-5520a7d505da"). InnerVolumeSpecName "kube-api-access-swnbb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:07:58 crc kubenswrapper[4760]: I0121 16:07:58.109729 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83ff3135-0e1c-46b4-a3a2-5520a7d505da-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:58 crc kubenswrapper[4760]: I0121 16:07:58.109770 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swnbb\" (UniqueName: \"kubernetes.io/projected/83ff3135-0e1c-46b4-a3a2-5520a7d505da-kube-api-access-swnbb\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:58 crc kubenswrapper[4760]: I0121 16:07:58.165053 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc","Type":"ContainerStarted","Data":"ab3cd7f31e2c3e34a83484c391607a086e596ad4358c3dc756862f9f818720e3"} Jan 21 16:07:58 crc kubenswrapper[4760]: I0121 16:07:58.171802 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-5lmzc" event={"ID":"83ff3135-0e1c-46b4-a3a2-5520a7d505da","Type":"ContainerDied","Data":"af89809b3911aa88a0970a4851d32f92d6e0ac9cd6710a0ddafdd2a2edc4fdd2"} Jan 21 16:07:58 crc kubenswrapper[4760]: I0121 16:07:58.171869 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af89809b3911aa88a0970a4851d32f92d6e0ac9cd6710a0ddafdd2a2edc4fdd2" Jan 21 16:07:58 crc kubenswrapper[4760]: I0121 16:07:58.171902 4760 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-5lmzc" Jan 21 16:07:58 crc kubenswrapper[4760]: I0121 16:07:58.183631 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-wgjbm" Jan 21 16:07:58 crc kubenswrapper[4760]: I0121 16:07:58.184588 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-wgjbm" event={"ID":"7d5e4041-ff0a-416e-b541-480b17fcc32e","Type":"ContainerDied","Data":"7909dc2338a0955b0a26b07bbcf2f19a1868860cc1f24db750be23fdf6f00a51"} Jan 21 16:07:58 crc kubenswrapper[4760]: I0121 16:07:58.184675 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7909dc2338a0955b0a26b07bbcf2f19a1868860cc1f24db750be23fdf6f00a51" Jan 21 16:07:58 crc kubenswrapper[4760]: I0121 16:07:58.189536 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f0d87473-0ca7-46b5-a57f-611e3014ab77","Type":"ContainerStarted","Data":"6d4bb7d023d619c9f0a548a7fb9de2319e7ff4fef857cc638cef56a3f8dc52e4"} Jan 21 16:07:58 crc kubenswrapper[4760]: I0121 16:07:58.189631 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 21 16:07:58 crc kubenswrapper[4760]: I0121 16:07:58.189661 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 21 16:07:58 crc kubenswrapper[4760]: I0121 16:07:58.226309 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.7665279919999999 podStartE2EDuration="3.226270783s" podCreationTimestamp="2026-01-21 16:07:55 +0000 UTC" firstStartedPulling="2026-01-21 16:07:56.416283035 +0000 UTC m=+1247.084052613" lastFinishedPulling="2026-01-21 16:07:57.876025826 +0000 UTC m=+1248.543795404" observedRunningTime="2026-01-21 16:07:58.212210478 +0000 UTC m=+1248.879980056" 
watchObservedRunningTime="2026-01-21 16:07:58.226270783 +0000 UTC m=+1248.894040361" Jan 21 16:07:58 crc kubenswrapper[4760]: I0121 16:07:58.538751 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2ffe-account-create-update-sq7hp" Jan 21 16:07:58 crc kubenswrapper[4760]: I0121 16:07:58.629206 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9pvc\" (UniqueName: \"kubernetes.io/projected/95551b69-b405-4008-b600-7010cea057a2-kube-api-access-t9pvc\") pod \"95551b69-b405-4008-b600-7010cea057a2\" (UID: \"95551b69-b405-4008-b600-7010cea057a2\") " Jan 21 16:07:58 crc kubenswrapper[4760]: I0121 16:07:58.629276 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95551b69-b405-4008-b600-7010cea057a2-operator-scripts\") pod \"95551b69-b405-4008-b600-7010cea057a2\" (UID: \"95551b69-b405-4008-b600-7010cea057a2\") " Jan 21 16:07:58 crc kubenswrapper[4760]: I0121 16:07:58.630687 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95551b69-b405-4008-b600-7010cea057a2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "95551b69-b405-4008-b600-7010cea057a2" (UID: "95551b69-b405-4008-b600-7010cea057a2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:58 crc kubenswrapper[4760]: I0121 16:07:58.638682 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95551b69-b405-4008-b600-7010cea057a2-kube-api-access-t9pvc" (OuterVolumeSpecName: "kube-api-access-t9pvc") pod "95551b69-b405-4008-b600-7010cea057a2" (UID: "95551b69-b405-4008-b600-7010cea057a2"). InnerVolumeSpecName "kube-api-access-t9pvc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:07:58 crc kubenswrapper[4760]: I0121 16:07:58.735114 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9pvc\" (UniqueName: \"kubernetes.io/projected/95551b69-b405-4008-b600-7010cea057a2-kube-api-access-t9pvc\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:58 crc kubenswrapper[4760]: I0121 16:07:58.735160 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95551b69-b405-4008-b600-7010cea057a2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:58 crc kubenswrapper[4760]: I0121 16:07:58.886764 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-s6bgh" Jan 21 16:07:58 crc kubenswrapper[4760]: I0121 16:07:58.902800 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1b63-account-create-update-s5scn" Jan 21 16:07:58 crc kubenswrapper[4760]: I0121 16:07:58.919787 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-49b7-account-create-update-cbnck" Jan 21 16:07:59 crc kubenswrapper[4760]: I0121 16:07:59.049827 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/684f7edc-9176-4aeb-8b75-8f083ba14d04-operator-scripts\") pod \"684f7edc-9176-4aeb-8b75-8f083ba14d04\" (UID: \"684f7edc-9176-4aeb-8b75-8f083ba14d04\") " Jan 21 16:07:59 crc kubenswrapper[4760]: I0121 16:07:59.049918 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqq4k\" (UniqueName: \"kubernetes.io/projected/b2bb047a-4a1f-4617-8d7a-66f80c84ea4a-kube-api-access-bqq4k\") pod \"b2bb047a-4a1f-4617-8d7a-66f80c84ea4a\" (UID: \"b2bb047a-4a1f-4617-8d7a-66f80c84ea4a\") " Jan 21 16:07:59 crc kubenswrapper[4760]: I0121 16:07:59.050083 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9jgb8\" (UniqueName: \"kubernetes.io/projected/11004437-56c2-4e20-911b-e31d6726fabc-kube-api-access-9jgb8\") pod \"11004437-56c2-4e20-911b-e31d6726fabc\" (UID: \"11004437-56c2-4e20-911b-e31d6726fabc\") " Jan 21 16:07:59 crc kubenswrapper[4760]: I0121 16:07:59.050138 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fc5xg\" (UniqueName: \"kubernetes.io/projected/684f7edc-9176-4aeb-8b75-8f083ba14d04-kube-api-access-fc5xg\") pod \"684f7edc-9176-4aeb-8b75-8f083ba14d04\" (UID: \"684f7edc-9176-4aeb-8b75-8f083ba14d04\") " Jan 21 16:07:59 crc kubenswrapper[4760]: I0121 16:07:59.050231 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2bb047a-4a1f-4617-8d7a-66f80c84ea4a-operator-scripts\") pod \"b2bb047a-4a1f-4617-8d7a-66f80c84ea4a\" (UID: \"b2bb047a-4a1f-4617-8d7a-66f80c84ea4a\") " Jan 21 16:07:59 crc kubenswrapper[4760]: I0121 16:07:59.050291 4760 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11004437-56c2-4e20-911b-e31d6726fabc-operator-scripts\") pod \"11004437-56c2-4e20-911b-e31d6726fabc\" (UID: \"11004437-56c2-4e20-911b-e31d6726fabc\") " Jan 21 16:07:59 crc kubenswrapper[4760]: I0121 16:07:59.052058 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11004437-56c2-4e20-911b-e31d6726fabc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "11004437-56c2-4e20-911b-e31d6726fabc" (UID: "11004437-56c2-4e20-911b-e31d6726fabc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:59 crc kubenswrapper[4760]: I0121 16:07:59.052570 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2bb047a-4a1f-4617-8d7a-66f80c84ea4a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b2bb047a-4a1f-4617-8d7a-66f80c84ea4a" (UID: "b2bb047a-4a1f-4617-8d7a-66f80c84ea4a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:59 crc kubenswrapper[4760]: I0121 16:07:59.053077 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/684f7edc-9176-4aeb-8b75-8f083ba14d04-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "684f7edc-9176-4aeb-8b75-8f083ba14d04" (UID: "684f7edc-9176-4aeb-8b75-8f083ba14d04"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:07:59 crc kubenswrapper[4760]: I0121 16:07:59.056518 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2bb047a-4a1f-4617-8d7a-66f80c84ea4a-kube-api-access-bqq4k" (OuterVolumeSpecName: "kube-api-access-bqq4k") pod "b2bb047a-4a1f-4617-8d7a-66f80c84ea4a" (UID: "b2bb047a-4a1f-4617-8d7a-66f80c84ea4a"). 
InnerVolumeSpecName "kube-api-access-bqq4k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:07:59 crc kubenswrapper[4760]: I0121 16:07:59.058901 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/684f7edc-9176-4aeb-8b75-8f083ba14d04-kube-api-access-fc5xg" (OuterVolumeSpecName: "kube-api-access-fc5xg") pod "684f7edc-9176-4aeb-8b75-8f083ba14d04" (UID: "684f7edc-9176-4aeb-8b75-8f083ba14d04"). InnerVolumeSpecName "kube-api-access-fc5xg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:07:59 crc kubenswrapper[4760]: I0121 16:07:59.061975 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11004437-56c2-4e20-911b-e31d6726fabc-kube-api-access-9jgb8" (OuterVolumeSpecName: "kube-api-access-9jgb8") pod "11004437-56c2-4e20-911b-e31d6726fabc" (UID: "11004437-56c2-4e20-911b-e31d6726fabc"). InnerVolumeSpecName "kube-api-access-9jgb8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:07:59 crc kubenswrapper[4760]: I0121 16:07:59.152244 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9jgb8\" (UniqueName: \"kubernetes.io/projected/11004437-56c2-4e20-911b-e31d6726fabc-kube-api-access-9jgb8\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:59 crc kubenswrapper[4760]: I0121 16:07:59.152274 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fc5xg\" (UniqueName: \"kubernetes.io/projected/684f7edc-9176-4aeb-8b75-8f083ba14d04-kube-api-access-fc5xg\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:59 crc kubenswrapper[4760]: I0121 16:07:59.152284 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2bb047a-4a1f-4617-8d7a-66f80c84ea4a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:59 crc kubenswrapper[4760]: I0121 16:07:59.152294 4760 reconciler_common.go:293] "Volume detached for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11004437-56c2-4e20-911b-e31d6726fabc-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:59 crc kubenswrapper[4760]: I0121 16:07:59.152302 4760 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/684f7edc-9176-4aeb-8b75-8f083ba14d04-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:59 crc kubenswrapper[4760]: I0121 16:07:59.152312 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqq4k\" (UniqueName: \"kubernetes.io/projected/b2bb047a-4a1f-4617-8d7a-66f80c84ea4a-kube-api-access-bqq4k\") on node \"crc\" DevicePath \"\"" Jan 21 16:07:59 crc kubenswrapper[4760]: I0121 16:07:59.197203 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-s6bgh" event={"ID":"11004437-56c2-4e20-911b-e31d6726fabc","Type":"ContainerDied","Data":"2551d1b0aef05c4bec6521949678ede7e793e771824811a0dce89fc3c28a9d79"} Jan 21 16:07:59 crc kubenswrapper[4760]: I0121 16:07:59.197249 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2551d1b0aef05c4bec6521949678ede7e793e771824811a0dce89fc3c28a9d79" Jan 21 16:07:59 crc kubenswrapper[4760]: I0121 16:07:59.197271 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-s6bgh" Jan 21 16:07:59 crc kubenswrapper[4760]: I0121 16:07:59.200163 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-49b7-account-create-update-cbnck" event={"ID":"b2bb047a-4a1f-4617-8d7a-66f80c84ea4a","Type":"ContainerDied","Data":"3ef2ebcbf962eba21493c8aa9a77895b978afd14c337cdd942578ee7cdae5b69"} Jan 21 16:07:59 crc kubenswrapper[4760]: I0121 16:07:59.200221 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ef2ebcbf962eba21493c8aa9a77895b978afd14c337cdd942578ee7cdae5b69" Jan 21 16:07:59 crc kubenswrapper[4760]: I0121 16:07:59.200191 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-49b7-account-create-update-cbnck" Jan 21 16:07:59 crc kubenswrapper[4760]: I0121 16:07:59.202631 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2ffe-account-create-update-sq7hp" Jan 21 16:07:59 crc kubenswrapper[4760]: I0121 16:07:59.202642 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2ffe-account-create-update-sq7hp" event={"ID":"95551b69-b405-4008-b600-7010cea057a2","Type":"ContainerDied","Data":"c54e39d19eef14a65a751c5ea6a630e1b0a0b38c303bed0fbea2e73c7e08ec69"} Jan 21 16:07:59 crc kubenswrapper[4760]: I0121 16:07:59.202678 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c54e39d19eef14a65a751c5ea6a630e1b0a0b38c303bed0fbea2e73c7e08ec69" Jan 21 16:07:59 crc kubenswrapper[4760]: I0121 16:07:59.204834 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1b63-account-create-update-s5scn" event={"ID":"684f7edc-9176-4aeb-8b75-8f083ba14d04","Type":"ContainerDied","Data":"97d41ba11ae35b3681fbfee8bb2cd6ffa01f48c0fe603807474b9ea5dd261efc"} Jan 21 16:07:59 crc kubenswrapper[4760]: I0121 16:07:59.204885 4760 pod_container_deletor.go:80] "Container 
not found in pod's containers" containerID="97d41ba11ae35b3681fbfee8bb2cd6ffa01f48c0fe603807474b9ea5dd261efc" Jan 21 16:07:59 crc kubenswrapper[4760]: I0121 16:07:59.205114 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1b63-account-create-update-s5scn" Jan 21 16:08:02 crc kubenswrapper[4760]: I0121 16:08:02.235257 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc","Type":"ContainerStarted","Data":"18722d79bf62b3f9cb631acd294cf085a703b60095bd4b9c6fb01936bf15831d"} Jan 21 16:08:02 crc kubenswrapper[4760]: I0121 16:08:02.235493 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0fab1ad7-a70b-424e-85c2-9bfc7b6233fc" containerName="proxy-httpd" containerID="cri-o://18722d79bf62b3f9cb631acd294cf085a703b60095bd4b9c6fb01936bf15831d" gracePeriod=30 Jan 21 16:08:02 crc kubenswrapper[4760]: I0121 16:08:02.235546 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 16:08:02 crc kubenswrapper[4760]: I0121 16:08:02.235448 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0fab1ad7-a70b-424e-85c2-9bfc7b6233fc" containerName="ceilometer-central-agent" containerID="cri-o://dc188a613184dfef070509def992410ee7ff8057764a779ff9ed43ca888f828c" gracePeriod=30 Jan 21 16:08:02 crc kubenswrapper[4760]: I0121 16:08:02.235616 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0fab1ad7-a70b-424e-85c2-9bfc7b6233fc" containerName="ceilometer-notification-agent" containerID="cri-o://22f3e8b98b2e1d9cdc66b041638a164799b75e9b8934c9e3b3dc665b2d174406" gracePeriod=30 Jan 21 16:08:02 crc kubenswrapper[4760]: I0121 16:08:02.235627 4760 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="0fab1ad7-a70b-424e-85c2-9bfc7b6233fc" containerName="sg-core" containerID="cri-o://ab3cd7f31e2c3e34a83484c391607a086e596ad4358c3dc756862f9f818720e3" gracePeriod=30 Jan 21 16:08:02 crc kubenswrapper[4760]: I0121 16:08:02.252141 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 21 16:08:02 crc kubenswrapper[4760]: I0121 16:08:02.252198 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 21 16:08:02 crc kubenswrapper[4760]: I0121 16:08:02.266805 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.86349915 podStartE2EDuration="11.266775179s" podCreationTimestamp="2026-01-21 16:07:51 +0000 UTC" firstStartedPulling="2026-01-21 16:07:53.443786813 +0000 UTC m=+1244.111556391" lastFinishedPulling="2026-01-21 16:08:00.847062842 +0000 UTC m=+1251.514832420" observedRunningTime="2026-01-21 16:08:02.255172194 +0000 UTC m=+1252.922941782" watchObservedRunningTime="2026-01-21 16:08:02.266775179 +0000 UTC m=+1252.934544757" Jan 21 16:08:02 crc kubenswrapper[4760]: I0121 16:08:02.293021 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 21 16:08:02 crc kubenswrapper[4760]: I0121 16:08:02.300881 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 21 16:08:02 crc kubenswrapper[4760]: I0121 16:08:02.379143 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 21 16:08:02 crc kubenswrapper[4760]: I0121 16:08:02.379241 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 21 16:08:02 crc kubenswrapper[4760]: I0121 16:08:02.414122 4760 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 21 16:08:02 crc kubenswrapper[4760]: I0121 16:08:02.434754 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.140013 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.228441 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-sg-core-conf-yaml\") pod \"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\" (UID: \"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\") " Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.228519 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwhq9\" (UniqueName: \"kubernetes.io/projected/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-kube-api-access-zwhq9\") pod \"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\" (UID: \"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\") " Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.228602 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-config-data\") pod \"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\" (UID: \"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\") " Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.228721 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-scripts\") pod \"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\" (UID: \"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\") " Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.228849 4760 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-log-httpd\") pod \"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\" (UID: \"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\") " Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.228909 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-combined-ca-bundle\") pod \"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\" (UID: \"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\") " Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.228977 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-run-httpd\") pod \"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\" (UID: \"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc\") " Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.229917 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0fab1ad7-a70b-424e-85c2-9bfc7b6233fc" (UID: "0fab1ad7-a70b-424e-85c2-9bfc7b6233fc"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.230396 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0fab1ad7-a70b-424e-85c2-9bfc7b6233fc" (UID: "0fab1ad7-a70b-424e-85c2-9bfc7b6233fc"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.235604 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-kube-api-access-zwhq9" (OuterVolumeSpecName: "kube-api-access-zwhq9") pod "0fab1ad7-a70b-424e-85c2-9bfc7b6233fc" (UID: "0fab1ad7-a70b-424e-85c2-9bfc7b6233fc"). InnerVolumeSpecName "kube-api-access-zwhq9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.243710 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-scripts" (OuterVolumeSpecName: "scripts") pod "0fab1ad7-a70b-424e-85c2-9bfc7b6233fc" (UID: "0fab1ad7-a70b-424e-85c2-9bfc7b6233fc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.354064 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwhq9\" (UniqueName: \"kubernetes.io/projected/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-kube-api-access-zwhq9\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.407523 4760 generic.go:334] "Generic (PLEG): container finished" podID="0fab1ad7-a70b-424e-85c2-9bfc7b6233fc" containerID="18722d79bf62b3f9cb631acd294cf085a703b60095bd4b9c6fb01936bf15831d" exitCode=0 Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.458723 4760 generic.go:334] "Generic (PLEG): container finished" podID="0fab1ad7-a70b-424e-85c2-9bfc7b6233fc" containerID="ab3cd7f31e2c3e34a83484c391607a086e596ad4358c3dc756862f9f818720e3" exitCode=2 Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.407553 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc","Type":"ContainerDied","Data":"18722d79bf62b3f9cb631acd294cf085a703b60095bd4b9c6fb01936bf15831d"} Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.458798 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.458823 4760 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.458840 4760 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.407646 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.458855 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc","Type":"ContainerDied","Data":"ab3cd7f31e2c3e34a83484c391607a086e596ad4358c3dc756862f9f818720e3"} Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.458893 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc","Type":"ContainerDied","Data":"22f3e8b98b2e1d9cdc66b041638a164799b75e9b8934c9e3b3dc665b2d174406"} Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.458914 4760 scope.go:117] "RemoveContainer" containerID="18722d79bf62b3f9cb631acd294cf085a703b60095bd4b9c6fb01936bf15831d" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.458764 4760 generic.go:334] "Generic (PLEG): container finished" podID="0fab1ad7-a70b-424e-85c2-9bfc7b6233fc" containerID="22f3e8b98b2e1d9cdc66b041638a164799b75e9b8934c9e3b3dc665b2d174406" exitCode=0 Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.459129 4760 generic.go:334] "Generic (PLEG): container finished" podID="0fab1ad7-a70b-424e-85c2-9bfc7b6233fc" containerID="dc188a613184dfef070509def992410ee7ff8057764a779ff9ed43ca888f828c" exitCode=0 Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.461306 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc","Type":"ContainerDied","Data":"dc188a613184dfef070509def992410ee7ff8057764a779ff9ed43ca888f828c"} Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.461369 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0fab1ad7-a70b-424e-85c2-9bfc7b6233fc","Type":"ContainerDied","Data":"225acde00eba57325c36a89c4f4f0390a34af1eb12a6c60f17cf37065932b7aa"} Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.461388 4760 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.461402 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.461621 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.461858 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.495359 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-config-data" (OuterVolumeSpecName: "config-data") pod "0fab1ad7-a70b-424e-85c2-9bfc7b6233fc" (UID: "0fab1ad7-a70b-424e-85c2-9bfc7b6233fc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.515437 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0fab1ad7-a70b-424e-85c2-9bfc7b6233fc" (UID: "0fab1ad7-a70b-424e-85c2-9bfc7b6233fc"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.525508 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0fab1ad7-a70b-424e-85c2-9bfc7b6233fc" (UID: "0fab1ad7-a70b-424e-85c2-9bfc7b6233fc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.560941 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.560983 4760 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.560996 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.619706 4760 scope.go:117] "RemoveContainer" containerID="ab3cd7f31e2c3e34a83484c391607a086e596ad4358c3dc756862f9f818720e3" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.686798 4760 scope.go:117] "RemoveContainer" containerID="22f3e8b98b2e1d9cdc66b041638a164799b75e9b8934c9e3b3dc665b2d174406" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.747643 4760 scope.go:117] "RemoveContainer" containerID="dc188a613184dfef070509def992410ee7ff8057764a779ff9ed43ca888f828c" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.776495 4760 scope.go:117] "RemoveContainer" containerID="18722d79bf62b3f9cb631acd294cf085a703b60095bd4b9c6fb01936bf15831d" Jan 21 16:08:03 crc kubenswrapper[4760]: E0121 16:08:03.784652 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18722d79bf62b3f9cb631acd294cf085a703b60095bd4b9c6fb01936bf15831d\": container with ID starting with 18722d79bf62b3f9cb631acd294cf085a703b60095bd4b9c6fb01936bf15831d not found: ID does not exist" 
containerID="18722d79bf62b3f9cb631acd294cf085a703b60095bd4b9c6fb01936bf15831d" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.784711 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18722d79bf62b3f9cb631acd294cf085a703b60095bd4b9c6fb01936bf15831d"} err="failed to get container status \"18722d79bf62b3f9cb631acd294cf085a703b60095bd4b9c6fb01936bf15831d\": rpc error: code = NotFound desc = could not find container \"18722d79bf62b3f9cb631acd294cf085a703b60095bd4b9c6fb01936bf15831d\": container with ID starting with 18722d79bf62b3f9cb631acd294cf085a703b60095bd4b9c6fb01936bf15831d not found: ID does not exist" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.784748 4760 scope.go:117] "RemoveContainer" containerID="ab3cd7f31e2c3e34a83484c391607a086e596ad4358c3dc756862f9f818720e3" Jan 21 16:08:03 crc kubenswrapper[4760]: E0121 16:08:03.785439 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab3cd7f31e2c3e34a83484c391607a086e596ad4358c3dc756862f9f818720e3\": container with ID starting with ab3cd7f31e2c3e34a83484c391607a086e596ad4358c3dc756862f9f818720e3 not found: ID does not exist" containerID="ab3cd7f31e2c3e34a83484c391607a086e596ad4358c3dc756862f9f818720e3" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.785609 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab3cd7f31e2c3e34a83484c391607a086e596ad4358c3dc756862f9f818720e3"} err="failed to get container status \"ab3cd7f31e2c3e34a83484c391607a086e596ad4358c3dc756862f9f818720e3\": rpc error: code = NotFound desc = could not find container \"ab3cd7f31e2c3e34a83484c391607a086e596ad4358c3dc756862f9f818720e3\": container with ID starting with ab3cd7f31e2c3e34a83484c391607a086e596ad4358c3dc756862f9f818720e3 not found: ID does not exist" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.785724 4760 scope.go:117] 
"RemoveContainer" containerID="22f3e8b98b2e1d9cdc66b041638a164799b75e9b8934c9e3b3dc665b2d174406" Jan 21 16:08:03 crc kubenswrapper[4760]: E0121 16:08:03.786107 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22f3e8b98b2e1d9cdc66b041638a164799b75e9b8934c9e3b3dc665b2d174406\": container with ID starting with 22f3e8b98b2e1d9cdc66b041638a164799b75e9b8934c9e3b3dc665b2d174406 not found: ID does not exist" containerID="22f3e8b98b2e1d9cdc66b041638a164799b75e9b8934c9e3b3dc665b2d174406" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.786140 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22f3e8b98b2e1d9cdc66b041638a164799b75e9b8934c9e3b3dc665b2d174406"} err="failed to get container status \"22f3e8b98b2e1d9cdc66b041638a164799b75e9b8934c9e3b3dc665b2d174406\": rpc error: code = NotFound desc = could not find container \"22f3e8b98b2e1d9cdc66b041638a164799b75e9b8934c9e3b3dc665b2d174406\": container with ID starting with 22f3e8b98b2e1d9cdc66b041638a164799b75e9b8934c9e3b3dc665b2d174406 not found: ID does not exist" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.786163 4760 scope.go:117] "RemoveContainer" containerID="dc188a613184dfef070509def992410ee7ff8057764a779ff9ed43ca888f828c" Jan 21 16:08:03 crc kubenswrapper[4760]: E0121 16:08:03.786387 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc188a613184dfef070509def992410ee7ff8057764a779ff9ed43ca888f828c\": container with ID starting with dc188a613184dfef070509def992410ee7ff8057764a779ff9ed43ca888f828c not found: ID does not exist" containerID="dc188a613184dfef070509def992410ee7ff8057764a779ff9ed43ca888f828c" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.786408 4760 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"dc188a613184dfef070509def992410ee7ff8057764a779ff9ed43ca888f828c"} err="failed to get container status \"dc188a613184dfef070509def992410ee7ff8057764a779ff9ed43ca888f828c\": rpc error: code = NotFound desc = could not find container \"dc188a613184dfef070509def992410ee7ff8057764a779ff9ed43ca888f828c\": container with ID starting with dc188a613184dfef070509def992410ee7ff8057764a779ff9ed43ca888f828c not found: ID does not exist" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.786428 4760 scope.go:117] "RemoveContainer" containerID="18722d79bf62b3f9cb631acd294cf085a703b60095bd4b9c6fb01936bf15831d" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.786635 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18722d79bf62b3f9cb631acd294cf085a703b60095bd4b9c6fb01936bf15831d"} err="failed to get container status \"18722d79bf62b3f9cb631acd294cf085a703b60095bd4b9c6fb01936bf15831d\": rpc error: code = NotFound desc = could not find container \"18722d79bf62b3f9cb631acd294cf085a703b60095bd4b9c6fb01936bf15831d\": container with ID starting with 18722d79bf62b3f9cb631acd294cf085a703b60095bd4b9c6fb01936bf15831d not found: ID does not exist" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.786663 4760 scope.go:117] "RemoveContainer" containerID="ab3cd7f31e2c3e34a83484c391607a086e596ad4358c3dc756862f9f818720e3" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.786858 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab3cd7f31e2c3e34a83484c391607a086e596ad4358c3dc756862f9f818720e3"} err="failed to get container status \"ab3cd7f31e2c3e34a83484c391607a086e596ad4358c3dc756862f9f818720e3\": rpc error: code = NotFound desc = could not find container \"ab3cd7f31e2c3e34a83484c391607a086e596ad4358c3dc756862f9f818720e3\": container with ID starting with ab3cd7f31e2c3e34a83484c391607a086e596ad4358c3dc756862f9f818720e3 not found: ID does not 
exist" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.786887 4760 scope.go:117] "RemoveContainer" containerID="22f3e8b98b2e1d9cdc66b041638a164799b75e9b8934c9e3b3dc665b2d174406" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.787105 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22f3e8b98b2e1d9cdc66b041638a164799b75e9b8934c9e3b3dc665b2d174406"} err="failed to get container status \"22f3e8b98b2e1d9cdc66b041638a164799b75e9b8934c9e3b3dc665b2d174406\": rpc error: code = NotFound desc = could not find container \"22f3e8b98b2e1d9cdc66b041638a164799b75e9b8934c9e3b3dc665b2d174406\": container with ID starting with 22f3e8b98b2e1d9cdc66b041638a164799b75e9b8934c9e3b3dc665b2d174406 not found: ID does not exist" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.787130 4760 scope.go:117] "RemoveContainer" containerID="dc188a613184dfef070509def992410ee7ff8057764a779ff9ed43ca888f828c" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.787344 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc188a613184dfef070509def992410ee7ff8057764a779ff9ed43ca888f828c"} err="failed to get container status \"dc188a613184dfef070509def992410ee7ff8057764a779ff9ed43ca888f828c\": rpc error: code = NotFound desc = could not find container \"dc188a613184dfef070509def992410ee7ff8057764a779ff9ed43ca888f828c\": container with ID starting with dc188a613184dfef070509def992410ee7ff8057764a779ff9ed43ca888f828c not found: ID does not exist" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.787372 4760 scope.go:117] "RemoveContainer" containerID="18722d79bf62b3f9cb631acd294cf085a703b60095bd4b9c6fb01936bf15831d" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.787593 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18722d79bf62b3f9cb631acd294cf085a703b60095bd4b9c6fb01936bf15831d"} err="failed to get container status 
\"18722d79bf62b3f9cb631acd294cf085a703b60095bd4b9c6fb01936bf15831d\": rpc error: code = NotFound desc = could not find container \"18722d79bf62b3f9cb631acd294cf085a703b60095bd4b9c6fb01936bf15831d\": container with ID starting with 18722d79bf62b3f9cb631acd294cf085a703b60095bd4b9c6fb01936bf15831d not found: ID does not exist" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.787620 4760 scope.go:117] "RemoveContainer" containerID="ab3cd7f31e2c3e34a83484c391607a086e596ad4358c3dc756862f9f818720e3" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.787862 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab3cd7f31e2c3e34a83484c391607a086e596ad4358c3dc756862f9f818720e3"} err="failed to get container status \"ab3cd7f31e2c3e34a83484c391607a086e596ad4358c3dc756862f9f818720e3\": rpc error: code = NotFound desc = could not find container \"ab3cd7f31e2c3e34a83484c391607a086e596ad4358c3dc756862f9f818720e3\": container with ID starting with ab3cd7f31e2c3e34a83484c391607a086e596ad4358c3dc756862f9f818720e3 not found: ID does not exist" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.787886 4760 scope.go:117] "RemoveContainer" containerID="22f3e8b98b2e1d9cdc66b041638a164799b75e9b8934c9e3b3dc665b2d174406" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.788135 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22f3e8b98b2e1d9cdc66b041638a164799b75e9b8934c9e3b3dc665b2d174406"} err="failed to get container status \"22f3e8b98b2e1d9cdc66b041638a164799b75e9b8934c9e3b3dc665b2d174406\": rpc error: code = NotFound desc = could not find container \"22f3e8b98b2e1d9cdc66b041638a164799b75e9b8934c9e3b3dc665b2d174406\": container with ID starting with 22f3e8b98b2e1d9cdc66b041638a164799b75e9b8934c9e3b3dc665b2d174406 not found: ID does not exist" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.788153 4760 scope.go:117] "RemoveContainer" 
containerID="dc188a613184dfef070509def992410ee7ff8057764a779ff9ed43ca888f828c" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.791963 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc188a613184dfef070509def992410ee7ff8057764a779ff9ed43ca888f828c"} err="failed to get container status \"dc188a613184dfef070509def992410ee7ff8057764a779ff9ed43ca888f828c\": rpc error: code = NotFound desc = could not find container \"dc188a613184dfef070509def992410ee7ff8057764a779ff9ed43ca888f828c\": container with ID starting with dc188a613184dfef070509def992410ee7ff8057764a779ff9ed43ca888f828c not found: ID does not exist" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.792025 4760 scope.go:117] "RemoveContainer" containerID="18722d79bf62b3f9cb631acd294cf085a703b60095bd4b9c6fb01936bf15831d" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.796455 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.797029 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18722d79bf62b3f9cb631acd294cf085a703b60095bd4b9c6fb01936bf15831d"} err="failed to get container status \"18722d79bf62b3f9cb631acd294cf085a703b60095bd4b9c6fb01936bf15831d\": rpc error: code = NotFound desc = could not find container \"18722d79bf62b3f9cb631acd294cf085a703b60095bd4b9c6fb01936bf15831d\": container with ID starting with 18722d79bf62b3f9cb631acd294cf085a703b60095bd4b9c6fb01936bf15831d not found: ID does not exist" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.797086 4760 scope.go:117] "RemoveContainer" containerID="ab3cd7f31e2c3e34a83484c391607a086e596ad4358c3dc756862f9f818720e3" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.797414 4760 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ab3cd7f31e2c3e34a83484c391607a086e596ad4358c3dc756862f9f818720e3"} err="failed to get container status \"ab3cd7f31e2c3e34a83484c391607a086e596ad4358c3dc756862f9f818720e3\": rpc error: code = NotFound desc = could not find container \"ab3cd7f31e2c3e34a83484c391607a086e596ad4358c3dc756862f9f818720e3\": container with ID starting with ab3cd7f31e2c3e34a83484c391607a086e596ad4358c3dc756862f9f818720e3 not found: ID does not exist" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.797438 4760 scope.go:117] "RemoveContainer" containerID="22f3e8b98b2e1d9cdc66b041638a164799b75e9b8934c9e3b3dc665b2d174406" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.801565 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22f3e8b98b2e1d9cdc66b041638a164799b75e9b8934c9e3b3dc665b2d174406"} err="failed to get container status \"22f3e8b98b2e1d9cdc66b041638a164799b75e9b8934c9e3b3dc665b2d174406\": rpc error: code = NotFound desc = could not find container \"22f3e8b98b2e1d9cdc66b041638a164799b75e9b8934c9e3b3dc665b2d174406\": container with ID starting with 22f3e8b98b2e1d9cdc66b041638a164799b75e9b8934c9e3b3dc665b2d174406 not found: ID does not exist" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.801913 4760 scope.go:117] "RemoveContainer" containerID="dc188a613184dfef070509def992410ee7ff8057764a779ff9ed43ca888f828c" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.803520 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc188a613184dfef070509def992410ee7ff8057764a779ff9ed43ca888f828c"} err="failed to get container status \"dc188a613184dfef070509def992410ee7ff8057764a779ff9ed43ca888f828c\": rpc error: code = NotFound desc = could not find container \"dc188a613184dfef070509def992410ee7ff8057764a779ff9ed43ca888f828c\": container with ID starting with dc188a613184dfef070509def992410ee7ff8057764a779ff9ed43ca888f828c not found: ID does not 
exist" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.835424 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.846413 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:08:03 crc kubenswrapper[4760]: E0121 16:08:03.846822 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95551b69-b405-4008-b600-7010cea057a2" containerName="mariadb-account-create-update" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.846841 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="95551b69-b405-4008-b600-7010cea057a2" containerName="mariadb-account-create-update" Jan 21 16:08:03 crc kubenswrapper[4760]: E0121 16:08:03.846856 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="684f7edc-9176-4aeb-8b75-8f083ba14d04" containerName="mariadb-account-create-update" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.846863 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="684f7edc-9176-4aeb-8b75-8f083ba14d04" containerName="mariadb-account-create-update" Jan 21 16:08:03 crc kubenswrapper[4760]: E0121 16:08:03.846874 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2bb047a-4a1f-4617-8d7a-66f80c84ea4a" containerName="mariadb-account-create-update" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.846880 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2bb047a-4a1f-4617-8d7a-66f80c84ea4a" containerName="mariadb-account-create-update" Jan 21 16:08:03 crc kubenswrapper[4760]: E0121 16:08:03.847127 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83ff3135-0e1c-46b4-a3a2-5520a7d505da" containerName="mariadb-database-create" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.847139 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="83ff3135-0e1c-46b4-a3a2-5520a7d505da" containerName="mariadb-database-create" Jan 21 
16:08:03 crc kubenswrapper[4760]: E0121 16:08:03.847163 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d5e4041-ff0a-416e-b541-480b17fcc32e" containerName="mariadb-database-create" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.847171 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d5e4041-ff0a-416e-b541-480b17fcc32e" containerName="mariadb-database-create" Jan 21 16:08:03 crc kubenswrapper[4760]: E0121 16:08:03.847191 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fab1ad7-a70b-424e-85c2-9bfc7b6233fc" containerName="proxy-httpd" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.847199 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fab1ad7-a70b-424e-85c2-9bfc7b6233fc" containerName="proxy-httpd" Jan 21 16:08:03 crc kubenswrapper[4760]: E0121 16:08:03.847212 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fab1ad7-a70b-424e-85c2-9bfc7b6233fc" containerName="ceilometer-central-agent" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.847219 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fab1ad7-a70b-424e-85c2-9bfc7b6233fc" containerName="ceilometer-central-agent" Jan 21 16:08:03 crc kubenswrapper[4760]: E0121 16:08:03.847231 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fab1ad7-a70b-424e-85c2-9bfc7b6233fc" containerName="ceilometer-notification-agent" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.847248 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fab1ad7-a70b-424e-85c2-9bfc7b6233fc" containerName="ceilometer-notification-agent" Jan 21 16:08:03 crc kubenswrapper[4760]: E0121 16:08:03.847260 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11004437-56c2-4e20-911b-e31d6726fabc" containerName="mariadb-database-create" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.847266 4760 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="11004437-56c2-4e20-911b-e31d6726fabc" containerName="mariadb-database-create" Jan 21 16:08:03 crc kubenswrapper[4760]: E0121 16:08:03.847277 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fab1ad7-a70b-424e-85c2-9bfc7b6233fc" containerName="sg-core" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.847283 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fab1ad7-a70b-424e-85c2-9bfc7b6233fc" containerName="sg-core" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.847637 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fab1ad7-a70b-424e-85c2-9bfc7b6233fc" containerName="ceilometer-central-agent" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.847653 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2bb047a-4a1f-4617-8d7a-66f80c84ea4a" containerName="mariadb-account-create-update" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.847663 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fab1ad7-a70b-424e-85c2-9bfc7b6233fc" containerName="proxy-httpd" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.847672 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="95551b69-b405-4008-b600-7010cea057a2" containerName="mariadb-account-create-update" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.847683 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="684f7edc-9176-4aeb-8b75-8f083ba14d04" containerName="mariadb-account-create-update" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.847695 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fab1ad7-a70b-424e-85c2-9bfc7b6233fc" containerName="sg-core" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.847712 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="83ff3135-0e1c-46b4-a3a2-5520a7d505da" containerName="mariadb-database-create" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.847722 
4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d5e4041-ff0a-416e-b541-480b17fcc32e" containerName="mariadb-database-create" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.847729 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="11004437-56c2-4e20-911b-e31d6726fabc" containerName="mariadb-database-create" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.847736 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fab1ad7-a70b-424e-85c2-9bfc7b6233fc" containerName="ceilometer-notification-agent" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.864729 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.874798 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.879317 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.879637 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.879807 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.979051 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69694845-95f8-4538-87a6-b1fc0929954e-config-data\") pod \"ceilometer-0\" (UID: \"69694845-95f8-4538-87a6-b1fc0929954e\") " pod="openstack/ceilometer-0" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.979180 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/69694845-95f8-4538-87a6-b1fc0929954e-log-httpd\") pod \"ceilometer-0\" (UID: \"69694845-95f8-4538-87a6-b1fc0929954e\") " pod="openstack/ceilometer-0" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.979229 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69694845-95f8-4538-87a6-b1fc0929954e-scripts\") pod \"ceilometer-0\" (UID: \"69694845-95f8-4538-87a6-b1fc0929954e\") " pod="openstack/ceilometer-0" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.979258 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69694845-95f8-4538-87a6-b1fc0929954e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"69694845-95f8-4538-87a6-b1fc0929954e\") " pod="openstack/ceilometer-0" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.979347 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmvxw\" (UniqueName: \"kubernetes.io/projected/69694845-95f8-4538-87a6-b1fc0929954e-kube-api-access-kmvxw\") pod \"ceilometer-0\" (UID: \"69694845-95f8-4538-87a6-b1fc0929954e\") " pod="openstack/ceilometer-0" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.979404 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69694845-95f8-4538-87a6-b1fc0929954e-run-httpd\") pod \"ceilometer-0\" (UID: \"69694845-95f8-4538-87a6-b1fc0929954e\") " pod="openstack/ceilometer-0" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.979468 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/69694845-95f8-4538-87a6-b1fc0929954e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: 
\"69694845-95f8-4538-87a6-b1fc0929954e\") " pod="openstack/ceilometer-0" Jan 21 16:08:03 crc kubenswrapper[4760]: I0121 16:08:03.979521 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/69694845-95f8-4538-87a6-b1fc0929954e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"69694845-95f8-4538-87a6-b1fc0929954e\") " pod="openstack/ceilometer-0" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.015093 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:08:04 crc kubenswrapper[4760]: E0121 16:08:04.015915 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ceilometer-tls-certs combined-ca-bundle config-data kube-api-access-kmvxw log-httpd run-httpd scripts sg-core-conf-yaml], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/ceilometer-0" podUID="69694845-95f8-4538-87a6-b1fc0929954e" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.069183 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-kwcw6"] Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.070414 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-kwcw6" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.082674 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.082705 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-zgb7k" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.083912 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/69694845-95f8-4538-87a6-b1fc0929954e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"69694845-95f8-4538-87a6-b1fc0929954e\") " pod="openstack/ceilometer-0" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.083966 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/69694845-95f8-4538-87a6-b1fc0929954e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"69694845-95f8-4538-87a6-b1fc0929954e\") " pod="openstack/ceilometer-0" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.083993 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69694845-95f8-4538-87a6-b1fc0929954e-config-data\") pod \"ceilometer-0\" (UID: \"69694845-95f8-4538-87a6-b1fc0929954e\") " pod="openstack/ceilometer-0" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.084058 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69694845-95f8-4538-87a6-b1fc0929954e-log-httpd\") pod \"ceilometer-0\" (UID: \"69694845-95f8-4538-87a6-b1fc0929954e\") " pod="openstack/ceilometer-0" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.084091 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/69694845-95f8-4538-87a6-b1fc0929954e-scripts\") pod \"ceilometer-0\" (UID: \"69694845-95f8-4538-87a6-b1fc0929954e\") " pod="openstack/ceilometer-0" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.084110 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69694845-95f8-4538-87a6-b1fc0929954e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"69694845-95f8-4538-87a6-b1fc0929954e\") " pod="openstack/ceilometer-0" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.084139 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmvxw\" (UniqueName: \"kubernetes.io/projected/69694845-95f8-4538-87a6-b1fc0929954e-kube-api-access-kmvxw\") pod \"ceilometer-0\" (UID: \"69694845-95f8-4538-87a6-b1fc0929954e\") " pod="openstack/ceilometer-0" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.084166 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69694845-95f8-4538-87a6-b1fc0929954e-run-httpd\") pod \"ceilometer-0\" (UID: \"69694845-95f8-4538-87a6-b1fc0929954e\") " pod="openstack/ceilometer-0" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.084704 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69694845-95f8-4538-87a6-b1fc0929954e-run-httpd\") pod \"ceilometer-0\" (UID: \"69694845-95f8-4538-87a6-b1fc0929954e\") " pod="openstack/ceilometer-0" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.085691 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69694845-95f8-4538-87a6-b1fc0929954e-log-httpd\") pod \"ceilometer-0\" (UID: \"69694845-95f8-4538-87a6-b1fc0929954e\") " pod="openstack/ceilometer-0" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 
16:08:04.086210 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.104399 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69694845-95f8-4538-87a6-b1fc0929954e-config-data\") pod \"ceilometer-0\" (UID: \"69694845-95f8-4538-87a6-b1fc0929954e\") " pod="openstack/ceilometer-0" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.110870 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/69694845-95f8-4538-87a6-b1fc0929954e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"69694845-95f8-4538-87a6-b1fc0929954e\") " pod="openstack/ceilometer-0" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.111480 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69694845-95f8-4538-87a6-b1fc0929954e-scripts\") pod \"ceilometer-0\" (UID: \"69694845-95f8-4538-87a6-b1fc0929954e\") " pod="openstack/ceilometer-0" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.117396 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-kwcw6"] Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.126946 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmvxw\" (UniqueName: \"kubernetes.io/projected/69694845-95f8-4538-87a6-b1fc0929954e-kube-api-access-kmvxw\") pod \"ceilometer-0\" (UID: \"69694845-95f8-4538-87a6-b1fc0929954e\") " pod="openstack/ceilometer-0" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.129208 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/69694845-95f8-4538-87a6-b1fc0929954e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: 
\"69694845-95f8-4538-87a6-b1fc0929954e\") " pod="openstack/ceilometer-0" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.142498 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69694845-95f8-4538-87a6-b1fc0929954e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"69694845-95f8-4538-87a6-b1fc0929954e\") " pod="openstack/ceilometer-0" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.185546 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98bcfa69-f25f-4f8a-8018-664dbdf6e1d3-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-kwcw6\" (UID: \"98bcfa69-f25f-4f8a-8018-664dbdf6e1d3\") " pod="openstack/nova-cell0-conductor-db-sync-kwcw6" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.185678 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvgsh\" (UniqueName: \"kubernetes.io/projected/98bcfa69-f25f-4f8a-8018-664dbdf6e1d3-kube-api-access-rvgsh\") pod \"nova-cell0-conductor-db-sync-kwcw6\" (UID: \"98bcfa69-f25f-4f8a-8018-664dbdf6e1d3\") " pod="openstack/nova-cell0-conductor-db-sync-kwcw6" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.185747 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98bcfa69-f25f-4f8a-8018-664dbdf6e1d3-config-data\") pod \"nova-cell0-conductor-db-sync-kwcw6\" (UID: \"98bcfa69-f25f-4f8a-8018-664dbdf6e1d3\") " pod="openstack/nova-cell0-conductor-db-sync-kwcw6" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.185815 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98bcfa69-f25f-4f8a-8018-664dbdf6e1d3-scripts\") pod \"nova-cell0-conductor-db-sync-kwcw6\" 
(UID: \"98bcfa69-f25f-4f8a-8018-664dbdf6e1d3\") " pod="openstack/nova-cell0-conductor-db-sync-kwcw6" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.288201 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvgsh\" (UniqueName: \"kubernetes.io/projected/98bcfa69-f25f-4f8a-8018-664dbdf6e1d3-kube-api-access-rvgsh\") pod \"nova-cell0-conductor-db-sync-kwcw6\" (UID: \"98bcfa69-f25f-4f8a-8018-664dbdf6e1d3\") " pod="openstack/nova-cell0-conductor-db-sync-kwcw6" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.289434 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98bcfa69-f25f-4f8a-8018-664dbdf6e1d3-config-data\") pod \"nova-cell0-conductor-db-sync-kwcw6\" (UID: \"98bcfa69-f25f-4f8a-8018-664dbdf6e1d3\") " pod="openstack/nova-cell0-conductor-db-sync-kwcw6" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.289605 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98bcfa69-f25f-4f8a-8018-664dbdf6e1d3-scripts\") pod \"nova-cell0-conductor-db-sync-kwcw6\" (UID: \"98bcfa69-f25f-4f8a-8018-664dbdf6e1d3\") " pod="openstack/nova-cell0-conductor-db-sync-kwcw6" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.289802 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98bcfa69-f25f-4f8a-8018-664dbdf6e1d3-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-kwcw6\" (UID: \"98bcfa69-f25f-4f8a-8018-664dbdf6e1d3\") " pod="openstack/nova-cell0-conductor-db-sync-kwcw6" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.294214 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98bcfa69-f25f-4f8a-8018-664dbdf6e1d3-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-kwcw6\" 
(UID: \"98bcfa69-f25f-4f8a-8018-664dbdf6e1d3\") " pod="openstack/nova-cell0-conductor-db-sync-kwcw6" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.294455 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98bcfa69-f25f-4f8a-8018-664dbdf6e1d3-config-data\") pod \"nova-cell0-conductor-db-sync-kwcw6\" (UID: \"98bcfa69-f25f-4f8a-8018-664dbdf6e1d3\") " pod="openstack/nova-cell0-conductor-db-sync-kwcw6" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.295195 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98bcfa69-f25f-4f8a-8018-664dbdf6e1d3-scripts\") pod \"nova-cell0-conductor-db-sync-kwcw6\" (UID: \"98bcfa69-f25f-4f8a-8018-664dbdf6e1d3\") " pod="openstack/nova-cell0-conductor-db-sync-kwcw6" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.307886 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvgsh\" (UniqueName: \"kubernetes.io/projected/98bcfa69-f25f-4f8a-8018-664dbdf6e1d3-kube-api-access-rvgsh\") pod \"nova-cell0-conductor-db-sync-kwcw6\" (UID: \"98bcfa69-f25f-4f8a-8018-664dbdf6e1d3\") " pod="openstack/nova-cell0-conductor-db-sync-kwcw6" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.470366 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.482911 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.490502 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-kwcw6" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.595968 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69694845-95f8-4538-87a6-b1fc0929954e-config-data\") pod \"69694845-95f8-4538-87a6-b1fc0929954e\" (UID: \"69694845-95f8-4538-87a6-b1fc0929954e\") " Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.596108 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69694845-95f8-4538-87a6-b1fc0929954e-log-httpd\") pod \"69694845-95f8-4538-87a6-b1fc0929954e\" (UID: \"69694845-95f8-4538-87a6-b1fc0929954e\") " Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.596148 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmvxw\" (UniqueName: \"kubernetes.io/projected/69694845-95f8-4538-87a6-b1fc0929954e-kube-api-access-kmvxw\") pod \"69694845-95f8-4538-87a6-b1fc0929954e\" (UID: \"69694845-95f8-4538-87a6-b1fc0929954e\") " Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.596374 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69694845-95f8-4538-87a6-b1fc0929954e-run-httpd\") pod \"69694845-95f8-4538-87a6-b1fc0929954e\" (UID: \"69694845-95f8-4538-87a6-b1fc0929954e\") " Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.596407 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69694845-95f8-4538-87a6-b1fc0929954e-combined-ca-bundle\") pod \"69694845-95f8-4538-87a6-b1fc0929954e\" (UID: \"69694845-95f8-4538-87a6-b1fc0929954e\") " Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.596448 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/69694845-95f8-4538-87a6-b1fc0929954e-sg-core-conf-yaml\") pod \"69694845-95f8-4538-87a6-b1fc0929954e\" (UID: \"69694845-95f8-4538-87a6-b1fc0929954e\") " Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.596476 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69694845-95f8-4538-87a6-b1fc0929954e-scripts\") pod \"69694845-95f8-4538-87a6-b1fc0929954e\" (UID: \"69694845-95f8-4538-87a6-b1fc0929954e\") " Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.596499 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/69694845-95f8-4538-87a6-b1fc0929954e-ceilometer-tls-certs\") pod \"69694845-95f8-4538-87a6-b1fc0929954e\" (UID: \"69694845-95f8-4538-87a6-b1fc0929954e\") " Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.596832 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69694845-95f8-4538-87a6-b1fc0929954e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "69694845-95f8-4538-87a6-b1fc0929954e" (UID: "69694845-95f8-4538-87a6-b1fc0929954e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.597305 4760 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69694845-95f8-4538-87a6-b1fc0929954e-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.599056 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69694845-95f8-4538-87a6-b1fc0929954e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "69694845-95f8-4538-87a6-b1fc0929954e" (UID: "69694845-95f8-4538-87a6-b1fc0929954e"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.601681 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69694845-95f8-4538-87a6-b1fc0929954e-config-data" (OuterVolumeSpecName: "config-data") pod "69694845-95f8-4538-87a6-b1fc0929954e" (UID: "69694845-95f8-4538-87a6-b1fc0929954e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.603742 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69694845-95f8-4538-87a6-b1fc0929954e-kube-api-access-kmvxw" (OuterVolumeSpecName: "kube-api-access-kmvxw") pod "69694845-95f8-4538-87a6-b1fc0929954e" (UID: "69694845-95f8-4538-87a6-b1fc0929954e"). InnerVolumeSpecName "kube-api-access-kmvxw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.607477 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69694845-95f8-4538-87a6-b1fc0929954e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "69694845-95f8-4538-87a6-b1fc0929954e" (UID: "69694845-95f8-4538-87a6-b1fc0929954e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.608760 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69694845-95f8-4538-87a6-b1fc0929954e-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "69694845-95f8-4538-87a6-b1fc0929954e" (UID: "69694845-95f8-4538-87a6-b1fc0929954e"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.610488 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69694845-95f8-4538-87a6-b1fc0929954e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "69694845-95f8-4538-87a6-b1fc0929954e" (UID: "69694845-95f8-4538-87a6-b1fc0929954e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.614444 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69694845-95f8-4538-87a6-b1fc0929954e-scripts" (OuterVolumeSpecName: "scripts") pod "69694845-95f8-4538-87a6-b1fc0929954e" (UID: "69694845-95f8-4538-87a6-b1fc0929954e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.700614 4760 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/69694845-95f8-4538-87a6-b1fc0929954e-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.701099 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmvxw\" (UniqueName: \"kubernetes.io/projected/69694845-95f8-4538-87a6-b1fc0929954e-kube-api-access-kmvxw\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.701113 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69694845-95f8-4538-87a6-b1fc0929954e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.701122 4760 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/69694845-95f8-4538-87a6-b1fc0929954e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" 
Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.701135 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69694845-95f8-4538-87a6-b1fc0929954e-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.701143 4760 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/69694845-95f8-4538-87a6-b1fc0929954e-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.701155 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69694845-95f8-4538-87a6-b1fc0929954e-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 16:08:04 crc kubenswrapper[4760]: W0121 16:08:04.869991 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98bcfa69_f25f_4f8a_8018_664dbdf6e1d3.slice/crio-e2940918a72e34441a85a4d61057ea04055ab2b4a9c608282b33cbd8908a4134 WatchSource:0}: Error finding container e2940918a72e34441a85a4d61057ea04055ab2b4a9c608282b33cbd8908a4134: Status 404 returned error can't find the container with id e2940918a72e34441a85a4d61057ea04055ab2b4a9c608282b33cbd8908a4134
Jan 21 16:08:04 crc kubenswrapper[4760]: I0121 16:08:04.871507 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-kwcw6"]
Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.079425 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.480615 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.486427 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-kwcw6" event={"ID":"98bcfa69-f25f-4f8a-8018-664dbdf6e1d3","Type":"ContainerStarted","Data":"e2940918a72e34441a85a4d61057ea04055ab2b4a9c608282b33cbd8908a4134"}
Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.486524 4760 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.486534 4760 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.486661 4760 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.486686 4760 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.566065 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.608475 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.613349 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.616892 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.624201 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.625024 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.624906 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.624962 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.668767 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fab1ad7-a70b-424e-85c2-9bfc7b6233fc" path="/var/lib/kubelet/pods/0fab1ad7-a70b-424e-85c2-9bfc7b6233fc/volumes"
Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.670116 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69694845-95f8-4538-87a6-b1fc0929954e" path="/var/lib/kubelet/pods/69694845-95f8-4538-87a6-b1fc0929954e/volumes"
Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.708838 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.779300 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c6e922ca-084a-4602-85c5-b97abdb8794b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c6e922ca-084a-4602-85c5-b97abdb8794b\") " pod="openstack/ceilometer-0"
Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.779709 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6e922ca-084a-4602-85c5-b97abdb8794b-log-httpd\") pod \"ceilometer-0\" (UID: \"c6e922ca-084a-4602-85c5-b97abdb8794b\") " pod="openstack/ceilometer-0"
Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.779780 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6e922ca-084a-4602-85c5-b97abdb8794b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c6e922ca-084a-4602-85c5-b97abdb8794b\") " pod="openstack/ceilometer-0"
Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.779867 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6e922ca-084a-4602-85c5-b97abdb8794b-run-httpd\") pod \"ceilometer-0\" (UID: \"c6e922ca-084a-4602-85c5-b97abdb8794b\") " pod="openstack/ceilometer-0"
Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.779903 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpbcf\" (UniqueName: \"kubernetes.io/projected/c6e922ca-084a-4602-85c5-b97abdb8794b-kube-api-access-hpbcf\") pod \"ceilometer-0\" (UID: \"c6e922ca-084a-4602-85c5-b97abdb8794b\") " pod="openstack/ceilometer-0"
Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.780112 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6e922ca-084a-4602-85c5-b97abdb8794b-scripts\") pod \"ceilometer-0\" (UID: \"c6e922ca-084a-4602-85c5-b97abdb8794b\") " pod="openstack/ceilometer-0"
Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.780233 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6e922ca-084a-4602-85c5-b97abdb8794b-config-data\") pod \"ceilometer-0\" (UID: 
\"c6e922ca-084a-4602-85c5-b97abdb8794b\") " pod="openstack/ceilometer-0"
Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.780315 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6e922ca-084a-4602-85c5-b97abdb8794b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c6e922ca-084a-4602-85c5-b97abdb8794b\") " pod="openstack/ceilometer-0"
Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.883197 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6e922ca-084a-4602-85c5-b97abdb8794b-scripts\") pod \"ceilometer-0\" (UID: \"c6e922ca-084a-4602-85c5-b97abdb8794b\") " pod="openstack/ceilometer-0"
Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.883757 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6e922ca-084a-4602-85c5-b97abdb8794b-config-data\") pod \"ceilometer-0\" (UID: \"c6e922ca-084a-4602-85c5-b97abdb8794b\") " pod="openstack/ceilometer-0"
Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.883898 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6e922ca-084a-4602-85c5-b97abdb8794b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c6e922ca-084a-4602-85c5-b97abdb8794b\") " pod="openstack/ceilometer-0"
Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.884093 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c6e922ca-084a-4602-85c5-b97abdb8794b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c6e922ca-084a-4602-85c5-b97abdb8794b\") " pod="openstack/ceilometer-0"
Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.884723 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6e922ca-084a-4602-85c5-b97abdb8794b-log-httpd\") pod \"ceilometer-0\" (UID: \"c6e922ca-084a-4602-85c5-b97abdb8794b\") " pod="openstack/ceilometer-0"
Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.884837 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6e922ca-084a-4602-85c5-b97abdb8794b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c6e922ca-084a-4602-85c5-b97abdb8794b\") " pod="openstack/ceilometer-0"
Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.884941 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6e922ca-084a-4602-85c5-b97abdb8794b-run-httpd\") pod \"ceilometer-0\" (UID: \"c6e922ca-084a-4602-85c5-b97abdb8794b\") " pod="openstack/ceilometer-0"
Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.885052 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpbcf\" (UniqueName: \"kubernetes.io/projected/c6e922ca-084a-4602-85c5-b97abdb8794b-kube-api-access-hpbcf\") pod \"ceilometer-0\" (UID: \"c6e922ca-084a-4602-85c5-b97abdb8794b\") " pod="openstack/ceilometer-0"
Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.885416 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6e922ca-084a-4602-85c5-b97abdb8794b-log-httpd\") pod \"ceilometer-0\" (UID: \"c6e922ca-084a-4602-85c5-b97abdb8794b\") " pod="openstack/ceilometer-0"
Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.885711 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6e922ca-084a-4602-85c5-b97abdb8794b-run-httpd\") pod \"ceilometer-0\" (UID: \"c6e922ca-084a-4602-85c5-b97abdb8794b\") " pod="openstack/ceilometer-0"
Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.891383 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6e922ca-084a-4602-85c5-b97abdb8794b-config-data\") pod \"ceilometer-0\" (UID: \"c6e922ca-084a-4602-85c5-b97abdb8794b\") " pod="openstack/ceilometer-0"
Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.891486 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6e922ca-084a-4602-85c5-b97abdb8794b-scripts\") pod \"ceilometer-0\" (UID: \"c6e922ca-084a-4602-85c5-b97abdb8794b\") " pod="openstack/ceilometer-0"
Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.891978 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6e922ca-084a-4602-85c5-b97abdb8794b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c6e922ca-084a-4602-85c5-b97abdb8794b\") " pod="openstack/ceilometer-0"
Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.895835 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c6e922ca-084a-4602-85c5-b97abdb8794b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c6e922ca-084a-4602-85c5-b97abdb8794b\") " pod="openstack/ceilometer-0"
Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.896413 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6e922ca-084a-4602-85c5-b97abdb8794b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c6e922ca-084a-4602-85c5-b97abdb8794b\") " pod="openstack/ceilometer-0"
Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.910940 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpbcf\" (UniqueName: \"kubernetes.io/projected/c6e922ca-084a-4602-85c5-b97abdb8794b-kube-api-access-hpbcf\") pod \"ceilometer-0\" (UID: \"c6e922ca-084a-4602-85c5-b97abdb8794b\") " pod="openstack/ceilometer-0"
Jan 21 16:08:05 crc kubenswrapper[4760]: I0121 16:08:05.949434 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 21 16:08:06 crc kubenswrapper[4760]: I0121 16:08:06.516798 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 16:08:06 crc kubenswrapper[4760]: I0121 16:08:06.560712 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-789c75ff48-s7f9p"
Jan 21 16:08:06 crc kubenswrapper[4760]: I0121 16:08:06.587428 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-5c9896dc76-gwrzv"
Jan 21 16:08:06 crc kubenswrapper[4760]: I0121 16:08:06.650671 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 21 16:08:06 crc kubenswrapper[4760]: I0121 16:08:06.650761 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 21 16:08:06 crc kubenswrapper[4760]: I0121 16:08:06.736151 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 21 16:08:06 crc kubenswrapper[4760]: I0121 16:08:06.736296 4760 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 21 16:08:06 crc kubenswrapper[4760]: I0121 16:08:06.738988 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 21 16:08:07 crc kubenswrapper[4760]: I0121 16:08:07.512516 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6e922ca-084a-4602-85c5-b97abdb8794b","Type":"ContainerStarted","Data":"ec159603ba90823a3365650c3997dcd138113f8b3a649999d2688320a497fe8f"}
Jan 21 16:08:08 crc kubenswrapper[4760]: I0121 16:08:08.527172 4760 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6e922ca-084a-4602-85c5-b97abdb8794b","Type":"ContainerStarted","Data":"3f7e4a3cd7987281ae2b0f0c77240c1d07750619e155def8a5ab0e9e3af4932c"}
Jan 21 16:08:09 crc kubenswrapper[4760]: I0121 16:08:09.430257 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-789c75ff48-s7f9p"
Jan 21 16:08:09 crc kubenswrapper[4760]: I0121 16:08:09.580319 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6e922ca-084a-4602-85c5-b97abdb8794b","Type":"ContainerStarted","Data":"5e7bf345a4fb4dd610c003da1b0ecd3008c561d466cbf9f0eed8ea21d215abba"}
Jan 21 16:08:09 crc kubenswrapper[4760]: I0121 16:08:09.584588 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-5c9896dc76-gwrzv"
Jan 21 16:08:09 crc kubenswrapper[4760]: I0121 16:08:09.704552 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-789c75ff48-s7f9p"]
Jan 21 16:08:09 crc kubenswrapper[4760]: I0121 16:08:09.704977 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-789c75ff48-s7f9p" podUID="9ce8d17c-d046-45b5-9136-6faca838de63" containerName="horizon-log" containerID="cri-o://dbc7df94dfd0bf190529b48e0582f7e96d1e1d6a91f71b8e6cbcbced81b5b549" gracePeriod=30
Jan 21 16:08:09 crc kubenswrapper[4760]: I0121 16:08:09.705219 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-789c75ff48-s7f9p" podUID="9ce8d17c-d046-45b5-9136-6faca838de63" containerName="horizon" containerID="cri-o://b0561fc99b64223a07d0ada5779e7047e4dd9e196ab449c4b0befd20ca184b74" gracePeriod=30
Jan 21 16:08:12 crc kubenswrapper[4760]: I0121 16:08:12.952629 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-789c75ff48-s7f9p" podUID="9ce8d17c-d046-45b5-9136-6faca838de63" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused"
Jan 21 16:08:13 crc kubenswrapper[4760]: I0121 16:08:13.635817 4760 generic.go:334] "Generic (PLEG): container finished" podID="9ce8d17c-d046-45b5-9136-6faca838de63" containerID="b0561fc99b64223a07d0ada5779e7047e4dd9e196ab449c4b0befd20ca184b74" exitCode=0
Jan 21 16:08:13 crc kubenswrapper[4760]: I0121 16:08:13.640628 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-789c75ff48-s7f9p" event={"ID":"9ce8d17c-d046-45b5-9136-6faca838de63","Type":"ContainerDied","Data":"b0561fc99b64223a07d0ada5779e7047e4dd9e196ab449c4b0befd20ca184b74"}
Jan 21 16:08:13 crc kubenswrapper[4760]: I0121 16:08:13.640706 4760 scope.go:117] "RemoveContainer" containerID="c3bc057180aff5b7f74696812035164d3822f5c925dea41492a6a319d6faaf1f"
Jan 21 16:08:15 crc kubenswrapper[4760]: I0121 16:08:15.343448 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 16:08:18 crc kubenswrapper[4760]: I0121 16:08:18.703487 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6e922ca-084a-4602-85c5-b97abdb8794b","Type":"ContainerStarted","Data":"9f91317e3bbc668b5c24b5c178247e6e4e9994528d9ce111646626c15b6d28b9"}
Jan 21 16:08:20 crc kubenswrapper[4760]: I0121 16:08:20.723259 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-kwcw6" event={"ID":"98bcfa69-f25f-4f8a-8018-664dbdf6e1d3","Type":"ContainerStarted","Data":"de3b43ba5de05ae071625dc753b0e6fa90712bb4fb5fcaf851c2c4dd803c1010"}
Jan 21 16:08:20 crc kubenswrapper[4760]: I0121 16:08:20.847193 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-kwcw6" podStartSLOduration=1.4189986430000001 podStartE2EDuration="16.847172857s" podCreationTimestamp="2026-01-21 16:08:04 +0000 UTC" firstStartedPulling="2026-01-21 16:08:04.877071891 +0000 UTC m=+1255.544841469" lastFinishedPulling="2026-01-21 16:08:20.305246105 +0000 UTC m=+1270.973015683" observedRunningTime="2026-01-21 16:08:20.73891122 +0000 UTC m=+1271.406680818" watchObservedRunningTime="2026-01-21 16:08:20.847172857 +0000 UTC m=+1271.514942435"
Jan 21 16:08:20 crc kubenswrapper[4760]: I0121 16:08:20.946077 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 16:08:20 crc kubenswrapper[4760]: I0121 16:08:20.946146 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 16:08:20 crc kubenswrapper[4760]: I0121 16:08:20.946195 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r"
Jan 21 16:08:20 crc kubenswrapper[4760]: I0121 16:08:20.946741 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"da9375843cc51770b9bc9917868815839c55dd5f95c6dfc9b5903ba3c26e61df"} pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 16:08:20 crc kubenswrapper[4760]: I0121 16:08:20.946798 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" containerID="cri-o://da9375843cc51770b9bc9917868815839c55dd5f95c6dfc9b5903ba3c26e61df" gracePeriod=600
Jan 21 16:08:21 crc kubenswrapper[4760]: E0121 16:08:21.182191 4760 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5dd365e7_570c_4130_a299_30e376624ce2.slice/crio-da9375843cc51770b9bc9917868815839c55dd5f95c6dfc9b5903ba3c26e61df.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5dd365e7_570c_4130_a299_30e376624ce2.slice/crio-conmon-da9375843cc51770b9bc9917868815839c55dd5f95c6dfc9b5903ba3c26e61df.scope\": RecentStats: unable to find data in memory cache]"
Jan 21 16:08:21 crc kubenswrapper[4760]: I0121 16:08:21.734595 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6e922ca-084a-4602-85c5-b97abdb8794b","Type":"ContainerStarted","Data":"8352b6347c9c05151244386126e1b7ea320a60ef51f4af2bad3490ca48ba31a7"}
Jan 21 16:08:21 crc kubenswrapper[4760]: I0121 16:08:21.735090 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 21 16:08:21 crc kubenswrapper[4760]: I0121 16:08:21.734765 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c6e922ca-084a-4602-85c5-b97abdb8794b" containerName="sg-core" containerID="cri-o://9f91317e3bbc668b5c24b5c178247e6e4e9994528d9ce111646626c15b6d28b9" gracePeriod=30
Jan 21 16:08:21 crc kubenswrapper[4760]: I0121 16:08:21.734784 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c6e922ca-084a-4602-85c5-b97abdb8794b" containerName="ceilometer-notification-agent" containerID="cri-o://5e7bf345a4fb4dd610c003da1b0ecd3008c561d466cbf9f0eed8ea21d215abba" gracePeriod=30
Jan 21 16:08:21 crc 
kubenswrapper[4760]: I0121 16:08:21.734783 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c6e922ca-084a-4602-85c5-b97abdb8794b" containerName="proxy-httpd" containerID="cri-o://8352b6347c9c05151244386126e1b7ea320a60ef51f4af2bad3490ca48ba31a7" gracePeriod=30
Jan 21 16:08:21 crc kubenswrapper[4760]: I0121 16:08:21.734724 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c6e922ca-084a-4602-85c5-b97abdb8794b" containerName="ceilometer-central-agent" containerID="cri-o://3f7e4a3cd7987281ae2b0f0c77240c1d07750619e155def8a5ab0e9e3af4932c" gracePeriod=30
Jan 21 16:08:21 crc kubenswrapper[4760]: I0121 16:08:21.752266 4760 generic.go:334] "Generic (PLEG): container finished" podID="5dd365e7-570c-4130-a299-30e376624ce2" containerID="da9375843cc51770b9bc9917868815839c55dd5f95c6dfc9b5903ba3c26e61df" exitCode=0
Jan 21 16:08:21 crc kubenswrapper[4760]: I0121 16:08:21.753033 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerDied","Data":"da9375843cc51770b9bc9917868815839c55dd5f95c6dfc9b5903ba3c26e61df"}
Jan 21 16:08:21 crc kubenswrapper[4760]: I0121 16:08:21.753116 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerStarted","Data":"e2bad28ace3137e8b3c05faf3797d4cccff7ccfe4381357924a1c6533e28ed41"}
Jan 21 16:08:21 crc kubenswrapper[4760]: I0121 16:08:21.753141 4760 scope.go:117] "RemoveContainer" containerID="d46da82d10de2ad82e008f50494383d7547214baecf8965338b3787de8bae17f"
Jan 21 16:08:21 crc kubenswrapper[4760]: I0121 16:08:21.769793 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.444964501 podStartE2EDuration="16.769771523s" podCreationTimestamp="2026-01-21 16:08:05 +0000 UTC" firstStartedPulling="2026-01-21 16:08:06.547315878 +0000 UTC m=+1257.215085456" lastFinishedPulling="2026-01-21 16:08:20.872122899 +0000 UTC m=+1271.539892478" observedRunningTime="2026-01-21 16:08:21.765925499 +0000 UTC m=+1272.433695087" watchObservedRunningTime="2026-01-21 16:08:21.769771523 +0000 UTC m=+1272.437541101"
Jan 21 16:08:22 crc kubenswrapper[4760]: I0121 16:08:22.765984 4760 generic.go:334] "Generic (PLEG): container finished" podID="c6e922ca-084a-4602-85c5-b97abdb8794b" containerID="8352b6347c9c05151244386126e1b7ea320a60ef51f4af2bad3490ca48ba31a7" exitCode=0
Jan 21 16:08:22 crc kubenswrapper[4760]: I0121 16:08:22.766212 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6e922ca-084a-4602-85c5-b97abdb8794b","Type":"ContainerDied","Data":"8352b6347c9c05151244386126e1b7ea320a60ef51f4af2bad3490ca48ba31a7"}
Jan 21 16:08:22 crc kubenswrapper[4760]: I0121 16:08:22.952780 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-789c75ff48-s7f9p" podUID="9ce8d17c-d046-45b5-9136-6faca838de63" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused"
Jan 21 16:08:23 crc kubenswrapper[4760]: I0121 16:08:23.792277 4760 generic.go:334] "Generic (PLEG): container finished" podID="c6e922ca-084a-4602-85c5-b97abdb8794b" containerID="9f91317e3bbc668b5c24b5c178247e6e4e9994528d9ce111646626c15b6d28b9" exitCode=2
Jan 21 16:08:23 crc kubenswrapper[4760]: I0121 16:08:23.793513 4760 generic.go:334] "Generic (PLEG): container finished" podID="c6e922ca-084a-4602-85c5-b97abdb8794b" containerID="5e7bf345a4fb4dd610c003da1b0ecd3008c561d466cbf9f0eed8ea21d215abba" exitCode=0
Jan 21 16:08:23 crc kubenswrapper[4760]: I0121 16:08:23.793627 4760 generic.go:334] "Generic (PLEG): container finished" podID="c6e922ca-084a-4602-85c5-b97abdb8794b" containerID="3f7e4a3cd7987281ae2b0f0c77240c1d07750619e155def8a5ab0e9e3af4932c" exitCode=0
Jan 21 16:08:23 crc kubenswrapper[4760]: I0121 16:08:23.792359 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6e922ca-084a-4602-85c5-b97abdb8794b","Type":"ContainerDied","Data":"9f91317e3bbc668b5c24b5c178247e6e4e9994528d9ce111646626c15b6d28b9"}
Jan 21 16:08:23 crc kubenswrapper[4760]: I0121 16:08:23.793881 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6e922ca-084a-4602-85c5-b97abdb8794b","Type":"ContainerDied","Data":"5e7bf345a4fb4dd610c003da1b0ecd3008c561d466cbf9f0eed8ea21d215abba"}
Jan 21 16:08:23 crc kubenswrapper[4760]: I0121 16:08:23.793964 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6e922ca-084a-4602-85c5-b97abdb8794b","Type":"ContainerDied","Data":"3f7e4a3cd7987281ae2b0f0c77240c1d07750619e155def8a5ab0e9e3af4932c"}
Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.144642 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.240751 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpbcf\" (UniqueName: \"kubernetes.io/projected/c6e922ca-084a-4602-85c5-b97abdb8794b-kube-api-access-hpbcf\") pod \"c6e922ca-084a-4602-85c5-b97abdb8794b\" (UID: \"c6e922ca-084a-4602-85c5-b97abdb8794b\") "
Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.240827 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6e922ca-084a-4602-85c5-b97abdb8794b-scripts\") pod \"c6e922ca-084a-4602-85c5-b97abdb8794b\" (UID: \"c6e922ca-084a-4602-85c5-b97abdb8794b\") "
Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.240880 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6e922ca-084a-4602-85c5-b97abdb8794b-log-httpd\") pod \"c6e922ca-084a-4602-85c5-b97abdb8794b\" (UID: \"c6e922ca-084a-4602-85c5-b97abdb8794b\") "
Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.240955 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6e922ca-084a-4602-85c5-b97abdb8794b-combined-ca-bundle\") pod \"c6e922ca-084a-4602-85c5-b97abdb8794b\" (UID: \"c6e922ca-084a-4602-85c5-b97abdb8794b\") "
Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.241019 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6e922ca-084a-4602-85c5-b97abdb8794b-config-data\") pod \"c6e922ca-084a-4602-85c5-b97abdb8794b\" (UID: \"c6e922ca-084a-4602-85c5-b97abdb8794b\") "
Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.241044 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c6e922ca-084a-4602-85c5-b97abdb8794b-ceilometer-tls-certs\") pod \"c6e922ca-084a-4602-85c5-b97abdb8794b\" (UID: \"c6e922ca-084a-4602-85c5-b97abdb8794b\") "
Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.241071 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6e922ca-084a-4602-85c5-b97abdb8794b-run-httpd\") pod \"c6e922ca-084a-4602-85c5-b97abdb8794b\" (UID: \"c6e922ca-084a-4602-85c5-b97abdb8794b\") "
Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.241095 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c6e922ca-084a-4602-85c5-b97abdb8794b-sg-core-conf-yaml\") pod \"c6e922ca-084a-4602-85c5-b97abdb8794b\" (UID: \"c6e922ca-084a-4602-85c5-b97abdb8794b\") "
Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.241744 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6e922ca-084a-4602-85c5-b97abdb8794b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c6e922ca-084a-4602-85c5-b97abdb8794b" (UID: "c6e922ca-084a-4602-85c5-b97abdb8794b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.241884 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6e922ca-084a-4602-85c5-b97abdb8794b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c6e922ca-084a-4602-85c5-b97abdb8794b" (UID: "c6e922ca-084a-4602-85c5-b97abdb8794b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.248783 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6e922ca-084a-4602-85c5-b97abdb8794b-kube-api-access-hpbcf" (OuterVolumeSpecName: "kube-api-access-hpbcf") pod "c6e922ca-084a-4602-85c5-b97abdb8794b" (UID: "c6e922ca-084a-4602-85c5-b97abdb8794b"). InnerVolumeSpecName "kube-api-access-hpbcf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.252566 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6e922ca-084a-4602-85c5-b97abdb8794b-scripts" (OuterVolumeSpecName: "scripts") pod "c6e922ca-084a-4602-85c5-b97abdb8794b" (UID: "c6e922ca-084a-4602-85c5-b97abdb8794b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.280253 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6e922ca-084a-4602-85c5-b97abdb8794b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c6e922ca-084a-4602-85c5-b97abdb8794b" (UID: "c6e922ca-084a-4602-85c5-b97abdb8794b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.348742 4760 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6e922ca-084a-4602-85c5-b97abdb8794b-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.349118 4760 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c6e922ca-084a-4602-85c5-b97abdb8794b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.349132 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hpbcf\" (UniqueName: \"kubernetes.io/projected/c6e922ca-084a-4602-85c5-b97abdb8794b-kube-api-access-hpbcf\") on node \"crc\" DevicePath \"\""
Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.349142 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6e922ca-084a-4602-85c5-b97abdb8794b-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.349153 4760 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6e922ca-084a-4602-85c5-b97abdb8794b-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.350454 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6e922ca-084a-4602-85c5-b97abdb8794b-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "c6e922ca-084a-4602-85c5-b97abdb8794b" (UID: "c6e922ca-084a-4602-85c5-b97abdb8794b"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.443514 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6e922ca-084a-4602-85c5-b97abdb8794b-config-data" (OuterVolumeSpecName: "config-data") pod "c6e922ca-084a-4602-85c5-b97abdb8794b" (UID: "c6e922ca-084a-4602-85c5-b97abdb8794b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.448503 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6e922ca-084a-4602-85c5-b97abdb8794b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c6e922ca-084a-4602-85c5-b97abdb8794b" (UID: "c6e922ca-084a-4602-85c5-b97abdb8794b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.454316 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6e922ca-084a-4602-85c5-b97abdb8794b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.454376 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6e922ca-084a-4602-85c5-b97abdb8794b-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.454389 4760 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6e922ca-084a-4602-85c5-b97abdb8794b-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.810445 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0"
event={"ID":"c6e922ca-084a-4602-85c5-b97abdb8794b","Type":"ContainerDied","Data":"ec159603ba90823a3365650c3997dcd138113f8b3a649999d2688320a497fe8f"} Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.810511 4760 scope.go:117] "RemoveContainer" containerID="8352b6347c9c05151244386126e1b7ea320a60ef51f4af2bad3490ca48ba31a7" Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.810652 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.878257 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.890022 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.892305 4760 scope.go:117] "RemoveContainer" containerID="9f91317e3bbc668b5c24b5c178247e6e4e9994528d9ce111646626c15b6d28b9" Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.916546 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:08:24 crc kubenswrapper[4760]: E0121 16:08:24.916982 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6e922ca-084a-4602-85c5-b97abdb8794b" containerName="ceilometer-notification-agent" Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.916999 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6e922ca-084a-4602-85c5-b97abdb8794b" containerName="ceilometer-notification-agent" Jan 21 16:08:24 crc kubenswrapper[4760]: E0121 16:08:24.917027 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6e922ca-084a-4602-85c5-b97abdb8794b" containerName="proxy-httpd" Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.917034 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6e922ca-084a-4602-85c5-b97abdb8794b" containerName="proxy-httpd" Jan 21 16:08:24 crc kubenswrapper[4760]: E0121 16:08:24.917047 
4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6e922ca-084a-4602-85c5-b97abdb8794b" containerName="ceilometer-central-agent" Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.917056 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6e922ca-084a-4602-85c5-b97abdb8794b" containerName="ceilometer-central-agent" Jan 21 16:08:24 crc kubenswrapper[4760]: E0121 16:08:24.917075 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6e922ca-084a-4602-85c5-b97abdb8794b" containerName="sg-core" Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.917083 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6e922ca-084a-4602-85c5-b97abdb8794b" containerName="sg-core" Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.917302 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6e922ca-084a-4602-85c5-b97abdb8794b" containerName="ceilometer-central-agent" Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.917350 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6e922ca-084a-4602-85c5-b97abdb8794b" containerName="ceilometer-notification-agent" Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.917367 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6e922ca-084a-4602-85c5-b97abdb8794b" containerName="proxy-httpd" Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.917378 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6e922ca-084a-4602-85c5-b97abdb8794b" containerName="sg-core" Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.923472 4760 scope.go:117] "RemoveContainer" containerID="5e7bf345a4fb4dd610c003da1b0ecd3008c561d466cbf9f0eed8ea21d215abba" Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.925839 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.930966 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.931955 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.932172 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.955941 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:08:24 crc kubenswrapper[4760]: I0121 16:08:24.975494 4760 scope.go:117] "RemoveContainer" containerID="3f7e4a3cd7987281ae2b0f0c77240c1d07750619e155def8a5ab0e9e3af4932c" Jan 21 16:08:25 crc kubenswrapper[4760]: I0121 16:08:25.066248 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d49331f-5dcf-4dc7-9f48-349473739b05-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7d49331f-5dcf-4dc7-9f48-349473739b05\") " pod="openstack/ceilometer-0" Jan 21 16:08:25 crc kubenswrapper[4760]: I0121 16:08:25.066843 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d49331f-5dcf-4dc7-9f48-349473739b05-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7d49331f-5dcf-4dc7-9f48-349473739b05\") " pod="openstack/ceilometer-0" Jan 21 16:08:25 crc kubenswrapper[4760]: I0121 16:08:25.066906 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54tq2\" (UniqueName: \"kubernetes.io/projected/7d49331f-5dcf-4dc7-9f48-349473739b05-kube-api-access-54tq2\") pod \"ceilometer-0\" (UID: 
\"7d49331f-5dcf-4dc7-9f48-349473739b05\") " pod="openstack/ceilometer-0" Jan 21 16:08:25 crc kubenswrapper[4760]: I0121 16:08:25.067011 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7d49331f-5dcf-4dc7-9f48-349473739b05-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7d49331f-5dcf-4dc7-9f48-349473739b05\") " pod="openstack/ceilometer-0" Jan 21 16:08:25 crc kubenswrapper[4760]: I0121 16:08:25.067034 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7d49331f-5dcf-4dc7-9f48-349473739b05-log-httpd\") pod \"ceilometer-0\" (UID: \"7d49331f-5dcf-4dc7-9f48-349473739b05\") " pod="openstack/ceilometer-0" Jan 21 16:08:25 crc kubenswrapper[4760]: I0121 16:08:25.067060 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7d49331f-5dcf-4dc7-9f48-349473739b05-run-httpd\") pod \"ceilometer-0\" (UID: \"7d49331f-5dcf-4dc7-9f48-349473739b05\") " pod="openstack/ceilometer-0" Jan 21 16:08:25 crc kubenswrapper[4760]: I0121 16:08:25.067139 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d49331f-5dcf-4dc7-9f48-349473739b05-config-data\") pod \"ceilometer-0\" (UID: \"7d49331f-5dcf-4dc7-9f48-349473739b05\") " pod="openstack/ceilometer-0" Jan 21 16:08:25 crc kubenswrapper[4760]: I0121 16:08:25.067167 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d49331f-5dcf-4dc7-9f48-349473739b05-scripts\") pod \"ceilometer-0\" (UID: \"7d49331f-5dcf-4dc7-9f48-349473739b05\") " pod="openstack/ceilometer-0" Jan 21 16:08:25 crc kubenswrapper[4760]: I0121 16:08:25.169269 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d49331f-5dcf-4dc7-9f48-349473739b05-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7d49331f-5dcf-4dc7-9f48-349473739b05\") " pod="openstack/ceilometer-0" Jan 21 16:08:25 crc kubenswrapper[4760]: I0121 16:08:25.169373 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d49331f-5dcf-4dc7-9f48-349473739b05-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7d49331f-5dcf-4dc7-9f48-349473739b05\") " pod="openstack/ceilometer-0" Jan 21 16:08:25 crc kubenswrapper[4760]: I0121 16:08:25.169421 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54tq2\" (UniqueName: \"kubernetes.io/projected/7d49331f-5dcf-4dc7-9f48-349473739b05-kube-api-access-54tq2\") pod \"ceilometer-0\" (UID: \"7d49331f-5dcf-4dc7-9f48-349473739b05\") " pod="openstack/ceilometer-0" Jan 21 16:08:25 crc kubenswrapper[4760]: I0121 16:08:25.169500 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7d49331f-5dcf-4dc7-9f48-349473739b05-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7d49331f-5dcf-4dc7-9f48-349473739b05\") " pod="openstack/ceilometer-0" Jan 21 16:08:25 crc kubenswrapper[4760]: I0121 16:08:25.169521 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7d49331f-5dcf-4dc7-9f48-349473739b05-log-httpd\") pod \"ceilometer-0\" (UID: \"7d49331f-5dcf-4dc7-9f48-349473739b05\") " pod="openstack/ceilometer-0" Jan 21 16:08:25 crc kubenswrapper[4760]: I0121 16:08:25.169551 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7d49331f-5dcf-4dc7-9f48-349473739b05-run-httpd\") pod 
\"ceilometer-0\" (UID: \"7d49331f-5dcf-4dc7-9f48-349473739b05\") " pod="openstack/ceilometer-0" Jan 21 16:08:25 crc kubenswrapper[4760]: I0121 16:08:25.169583 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d49331f-5dcf-4dc7-9f48-349473739b05-config-data\") pod \"ceilometer-0\" (UID: \"7d49331f-5dcf-4dc7-9f48-349473739b05\") " pod="openstack/ceilometer-0" Jan 21 16:08:25 crc kubenswrapper[4760]: I0121 16:08:25.169603 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d49331f-5dcf-4dc7-9f48-349473739b05-scripts\") pod \"ceilometer-0\" (UID: \"7d49331f-5dcf-4dc7-9f48-349473739b05\") " pod="openstack/ceilometer-0" Jan 21 16:08:25 crc kubenswrapper[4760]: I0121 16:08:25.170486 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7d49331f-5dcf-4dc7-9f48-349473739b05-log-httpd\") pod \"ceilometer-0\" (UID: \"7d49331f-5dcf-4dc7-9f48-349473739b05\") " pod="openstack/ceilometer-0" Jan 21 16:08:25 crc kubenswrapper[4760]: I0121 16:08:25.170492 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7d49331f-5dcf-4dc7-9f48-349473739b05-run-httpd\") pod \"ceilometer-0\" (UID: \"7d49331f-5dcf-4dc7-9f48-349473739b05\") " pod="openstack/ceilometer-0" Jan 21 16:08:25 crc kubenswrapper[4760]: I0121 16:08:25.173843 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7d49331f-5dcf-4dc7-9f48-349473739b05-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7d49331f-5dcf-4dc7-9f48-349473739b05\") " pod="openstack/ceilometer-0" Jan 21 16:08:25 crc kubenswrapper[4760]: I0121 16:08:25.175666 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/7d49331f-5dcf-4dc7-9f48-349473739b05-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7d49331f-5dcf-4dc7-9f48-349473739b05\") " pod="openstack/ceilometer-0" Jan 21 16:08:25 crc kubenswrapper[4760]: I0121 16:08:25.175834 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d49331f-5dcf-4dc7-9f48-349473739b05-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7d49331f-5dcf-4dc7-9f48-349473739b05\") " pod="openstack/ceilometer-0" Jan 21 16:08:25 crc kubenswrapper[4760]: I0121 16:08:25.176110 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d49331f-5dcf-4dc7-9f48-349473739b05-config-data\") pod \"ceilometer-0\" (UID: \"7d49331f-5dcf-4dc7-9f48-349473739b05\") " pod="openstack/ceilometer-0" Jan 21 16:08:25 crc kubenswrapper[4760]: I0121 16:08:25.189405 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d49331f-5dcf-4dc7-9f48-349473739b05-scripts\") pod \"ceilometer-0\" (UID: \"7d49331f-5dcf-4dc7-9f48-349473739b05\") " pod="openstack/ceilometer-0" Jan 21 16:08:25 crc kubenswrapper[4760]: I0121 16:08:25.191961 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54tq2\" (UniqueName: \"kubernetes.io/projected/7d49331f-5dcf-4dc7-9f48-349473739b05-kube-api-access-54tq2\") pod \"ceilometer-0\" (UID: \"7d49331f-5dcf-4dc7-9f48-349473739b05\") " pod="openstack/ceilometer-0" Jan 21 16:08:25 crc kubenswrapper[4760]: I0121 16:08:25.274867 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:08:25 crc kubenswrapper[4760]: I0121 16:08:25.559250 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:08:25 crc kubenswrapper[4760]: W0121 16:08:25.561278 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d49331f_5dcf_4dc7_9f48_349473739b05.slice/crio-d50dbecfaa1ab7045513c6d4844df29e77d2d1cb1d5200316fb73aea673104e8 WatchSource:0}: Error finding container d50dbecfaa1ab7045513c6d4844df29e77d2d1cb1d5200316fb73aea673104e8: Status 404 returned error can't find the container with id d50dbecfaa1ab7045513c6d4844df29e77d2d1cb1d5200316fb73aea673104e8 Jan 21 16:08:25 crc kubenswrapper[4760]: I0121 16:08:25.635457 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6e922ca-084a-4602-85c5-b97abdb8794b" path="/var/lib/kubelet/pods/c6e922ca-084a-4602-85c5-b97abdb8794b/volumes" Jan 21 16:08:25 crc kubenswrapper[4760]: I0121 16:08:25.822634 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7d49331f-5dcf-4dc7-9f48-349473739b05","Type":"ContainerStarted","Data":"d50dbecfaa1ab7045513c6d4844df29e77d2d1cb1d5200316fb73aea673104e8"} Jan 21 16:08:26 crc kubenswrapper[4760]: I0121 16:08:26.835925 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7d49331f-5dcf-4dc7-9f48-349473739b05","Type":"ContainerStarted","Data":"e407805d038c55767aad3dac8729f1cdfaa767c83ffc8ef44ae05a6024899dd2"} Jan 21 16:08:27 crc kubenswrapper[4760]: I0121 16:08:27.112720 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:08:27 crc kubenswrapper[4760]: I0121 16:08:27.857731 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"7d49331f-5dcf-4dc7-9f48-349473739b05","Type":"ContainerStarted","Data":"6e43ce0afcec946cb945ecbca9f187584bd430a0cdd795c7daa5242a8f72dd4f"} Jan 21 16:08:28 crc kubenswrapper[4760]: I0121 16:08:28.886988 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7d49331f-5dcf-4dc7-9f48-349473739b05","Type":"ContainerStarted","Data":"5f1fa992e3540d242aaa706e606ee7d658f82396471d40dd2647de0eac793402"} Jan 21 16:08:31 crc kubenswrapper[4760]: I0121 16:08:31.027120 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7d49331f-5dcf-4dc7-9f48-349473739b05","Type":"ContainerStarted","Data":"c61f0bc991152c5e56ab800164cafa7a56f0a79d7b282faca083db8b87a3d334"} Jan 21 16:08:31 crc kubenswrapper[4760]: I0121 16:08:31.027385 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7d49331f-5dcf-4dc7-9f48-349473739b05" containerName="ceilometer-central-agent" containerID="cri-o://e407805d038c55767aad3dac8729f1cdfaa767c83ffc8ef44ae05a6024899dd2" gracePeriod=30 Jan 21 16:08:31 crc kubenswrapper[4760]: I0121 16:08:31.027700 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7d49331f-5dcf-4dc7-9f48-349473739b05" containerName="sg-core" containerID="cri-o://5f1fa992e3540d242aaa706e606ee7d658f82396471d40dd2647de0eac793402" gracePeriod=30 Jan 21 16:08:31 crc kubenswrapper[4760]: I0121 16:08:31.027697 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7d49331f-5dcf-4dc7-9f48-349473739b05" containerName="ceilometer-notification-agent" containerID="cri-o://6e43ce0afcec946cb945ecbca9f187584bd430a0cdd795c7daa5242a8f72dd4f" gracePeriod=30 Jan 21 16:08:31 crc kubenswrapper[4760]: I0121 16:08:31.028040 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 16:08:31 crc 
kubenswrapper[4760]: I0121 16:08:31.027738 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7d49331f-5dcf-4dc7-9f48-349473739b05" containerName="proxy-httpd" containerID="cri-o://c61f0bc991152c5e56ab800164cafa7a56f0a79d7b282faca083db8b87a3d334" gracePeriod=30 Jan 21 16:08:31 crc kubenswrapper[4760]: I0121 16:08:31.070042 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.46135644 podStartE2EDuration="7.069920911s" podCreationTimestamp="2026-01-21 16:08:24 +0000 UTC" firstStartedPulling="2026-01-21 16:08:25.565608554 +0000 UTC m=+1276.233378132" lastFinishedPulling="2026-01-21 16:08:30.174173035 +0000 UTC m=+1280.841942603" observedRunningTime="2026-01-21 16:08:31.060269044 +0000 UTC m=+1281.728038632" watchObservedRunningTime="2026-01-21 16:08:31.069920911 +0000 UTC m=+1281.737690499" Jan 21 16:08:32 crc kubenswrapper[4760]: I0121 16:08:32.045092 4760 generic.go:334] "Generic (PLEG): container finished" podID="7d49331f-5dcf-4dc7-9f48-349473739b05" containerID="c61f0bc991152c5e56ab800164cafa7a56f0a79d7b282faca083db8b87a3d334" exitCode=0 Jan 21 16:08:32 crc kubenswrapper[4760]: I0121 16:08:32.046033 4760 generic.go:334] "Generic (PLEG): container finished" podID="7d49331f-5dcf-4dc7-9f48-349473739b05" containerID="5f1fa992e3540d242aaa706e606ee7d658f82396471d40dd2647de0eac793402" exitCode=2 Jan 21 16:08:32 crc kubenswrapper[4760]: I0121 16:08:32.046049 4760 generic.go:334] "Generic (PLEG): container finished" podID="7d49331f-5dcf-4dc7-9f48-349473739b05" containerID="6e43ce0afcec946cb945ecbca9f187584bd430a0cdd795c7daa5242a8f72dd4f" exitCode=0 Jan 21 16:08:32 crc kubenswrapper[4760]: I0121 16:08:32.045281 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7d49331f-5dcf-4dc7-9f48-349473739b05","Type":"ContainerDied","Data":"c61f0bc991152c5e56ab800164cafa7a56f0a79d7b282faca083db8b87a3d334"} Jan 
21 16:08:32 crc kubenswrapper[4760]: I0121 16:08:32.046121 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7d49331f-5dcf-4dc7-9f48-349473739b05","Type":"ContainerDied","Data":"5f1fa992e3540d242aaa706e606ee7d658f82396471d40dd2647de0eac793402"} Jan 21 16:08:32 crc kubenswrapper[4760]: I0121 16:08:32.046139 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7d49331f-5dcf-4dc7-9f48-349473739b05","Type":"ContainerDied","Data":"6e43ce0afcec946cb945ecbca9f187584bd430a0cdd795c7daa5242a8f72dd4f"} Jan 21 16:08:32 crc kubenswrapper[4760]: I0121 16:08:32.952839 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-789c75ff48-s7f9p" podUID="9ce8d17c-d046-45b5-9136-6faca838de63" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Jan 21 16:08:32 crc kubenswrapper[4760]: I0121 16:08:32.953646 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-789c75ff48-s7f9p" Jan 21 16:08:39 crc kubenswrapper[4760]: I0121 16:08:39.128298 4760 generic.go:334] "Generic (PLEG): container finished" podID="7d49331f-5dcf-4dc7-9f48-349473739b05" containerID="e407805d038c55767aad3dac8729f1cdfaa767c83ffc8ef44ae05a6024899dd2" exitCode=0 Jan 21 16:08:39 crc kubenswrapper[4760]: I0121 16:08:39.128392 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7d49331f-5dcf-4dc7-9f48-349473739b05","Type":"ContainerDied","Data":"e407805d038c55767aad3dac8729f1cdfaa767c83ffc8ef44ae05a6024899dd2"} Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.088216 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.144357 4760 generic.go:334] "Generic (PLEG): container finished" podID="9ce8d17c-d046-45b5-9136-6faca838de63" containerID="dbc7df94dfd0bf190529b48e0582f7e96d1e1d6a91f71b8e6cbcbced81b5b549" exitCode=137 Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.144462 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-789c75ff48-s7f9p" event={"ID":"9ce8d17c-d046-45b5-9136-6faca838de63","Type":"ContainerDied","Data":"dbc7df94dfd0bf190529b48e0582f7e96d1e1d6a91f71b8e6cbcbced81b5b549"} Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.160423 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7d49331f-5dcf-4dc7-9f48-349473739b05","Type":"ContainerDied","Data":"d50dbecfaa1ab7045513c6d4844df29e77d2d1cb1d5200316fb73aea673104e8"} Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.160783 4760 scope.go:117] "RemoveContainer" containerID="c61f0bc991152c5e56ab800164cafa7a56f0a79d7b282faca083db8b87a3d334" Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.161158 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.162912 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7d49331f-5dcf-4dc7-9f48-349473739b05-log-httpd\") pod \"7d49331f-5dcf-4dc7-9f48-349473739b05\" (UID: \"7d49331f-5dcf-4dc7-9f48-349473739b05\") " Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.162992 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d49331f-5dcf-4dc7-9f48-349473739b05-scripts\") pod \"7d49331f-5dcf-4dc7-9f48-349473739b05\" (UID: \"7d49331f-5dcf-4dc7-9f48-349473739b05\") " Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.163045 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54tq2\" (UniqueName: \"kubernetes.io/projected/7d49331f-5dcf-4dc7-9f48-349473739b05-kube-api-access-54tq2\") pod \"7d49331f-5dcf-4dc7-9f48-349473739b05\" (UID: \"7d49331f-5dcf-4dc7-9f48-349473739b05\") " Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.163116 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d49331f-5dcf-4dc7-9f48-349473739b05-config-data\") pod \"7d49331f-5dcf-4dc7-9f48-349473739b05\" (UID: \"7d49331f-5dcf-4dc7-9f48-349473739b05\") " Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.163211 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7d49331f-5dcf-4dc7-9f48-349473739b05-run-httpd\") pod \"7d49331f-5dcf-4dc7-9f48-349473739b05\" (UID: \"7d49331f-5dcf-4dc7-9f48-349473739b05\") " Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.163241 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7d49331f-5dcf-4dc7-9f48-349473739b05-combined-ca-bundle\") pod \"7d49331f-5dcf-4dc7-9f48-349473739b05\" (UID: \"7d49331f-5dcf-4dc7-9f48-349473739b05\") " Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.163297 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d49331f-5dcf-4dc7-9f48-349473739b05-ceilometer-tls-certs\") pod \"7d49331f-5dcf-4dc7-9f48-349473739b05\" (UID: \"7d49331f-5dcf-4dc7-9f48-349473739b05\") " Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.163364 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7d49331f-5dcf-4dc7-9f48-349473739b05-sg-core-conf-yaml\") pod \"7d49331f-5dcf-4dc7-9f48-349473739b05\" (UID: \"7d49331f-5dcf-4dc7-9f48-349473739b05\") " Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.163618 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d49331f-5dcf-4dc7-9f48-349473739b05-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7d49331f-5dcf-4dc7-9f48-349473739b05" (UID: "7d49331f-5dcf-4dc7-9f48-349473739b05"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.164088 4760 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7d49331f-5dcf-4dc7-9f48-349473739b05-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.164767 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d49331f-5dcf-4dc7-9f48-349473739b05-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7d49331f-5dcf-4dc7-9f48-349473739b05" (UID: "7d49331f-5dcf-4dc7-9f48-349473739b05"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.171025 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d49331f-5dcf-4dc7-9f48-349473739b05-kube-api-access-54tq2" (OuterVolumeSpecName: "kube-api-access-54tq2") pod "7d49331f-5dcf-4dc7-9f48-349473739b05" (UID: "7d49331f-5dcf-4dc7-9f48-349473739b05"). InnerVolumeSpecName "kube-api-access-54tq2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.171684 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d49331f-5dcf-4dc7-9f48-349473739b05-scripts" (OuterVolumeSpecName: "scripts") pod "7d49331f-5dcf-4dc7-9f48-349473739b05" (UID: "7d49331f-5dcf-4dc7-9f48-349473739b05"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.193942 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d49331f-5dcf-4dc7-9f48-349473739b05-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7d49331f-5dcf-4dc7-9f48-349473739b05" (UID: "7d49331f-5dcf-4dc7-9f48-349473739b05"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.254626 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d49331f-5dcf-4dc7-9f48-349473739b05-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "7d49331f-5dcf-4dc7-9f48-349473739b05" (UID: "7d49331f-5dcf-4dc7-9f48-349473739b05"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.266645 4760 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7d49331f-5dcf-4dc7-9f48-349473739b05-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.266681 4760 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d49331f-5dcf-4dc7-9f48-349473739b05-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.266693 4760 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7d49331f-5dcf-4dc7-9f48-349473739b05-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.266704 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d49331f-5dcf-4dc7-9f48-349473739b05-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.266721 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54tq2\" (UniqueName: \"kubernetes.io/projected/7d49331f-5dcf-4dc7-9f48-349473739b05-kube-api-access-54tq2\") on node \"crc\" DevicePath \"\""
Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.277770 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d49331f-5dcf-4dc7-9f48-349473739b05-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7d49331f-5dcf-4dc7-9f48-349473739b05" (UID: "7d49331f-5dcf-4dc7-9f48-349473739b05"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.287779 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d49331f-5dcf-4dc7-9f48-349473739b05-config-data" (OuterVolumeSpecName: "config-data") pod "7d49331f-5dcf-4dc7-9f48-349473739b05" (UID: "7d49331f-5dcf-4dc7-9f48-349473739b05"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.293788 4760 scope.go:117] "RemoveContainer" containerID="5f1fa992e3540d242aaa706e606ee7d658f82396471d40dd2647de0eac793402"
Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.316442 4760 scope.go:117] "RemoveContainer" containerID="6e43ce0afcec946cb945ecbca9f187584bd430a0cdd795c7daa5242a8f72dd4f"
Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.337191 4760 scope.go:117] "RemoveContainer" containerID="e407805d038c55767aad3dac8729f1cdfaa767c83ffc8ef44ae05a6024899dd2"
Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.369490 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d49331f-5dcf-4dc7-9f48-349473739b05-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.369558 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d49331f-5dcf-4dc7-9f48-349473739b05-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.512783 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.549759 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.568291 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 21 16:08:40 crc kubenswrapper[4760]: E0121 16:08:40.568831 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d49331f-5dcf-4dc7-9f48-349473739b05" containerName="proxy-httpd"
Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.568861 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d49331f-5dcf-4dc7-9f48-349473739b05" containerName="proxy-httpd"
Jan 21 16:08:40 crc kubenswrapper[4760]: E0121 16:08:40.568875 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d49331f-5dcf-4dc7-9f48-349473739b05" containerName="ceilometer-central-agent"
Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.568883 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d49331f-5dcf-4dc7-9f48-349473739b05" containerName="ceilometer-central-agent"
Jan 21 16:08:40 crc kubenswrapper[4760]: E0121 16:08:40.568909 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d49331f-5dcf-4dc7-9f48-349473739b05" containerName="ceilometer-notification-agent"
Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.568915 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d49331f-5dcf-4dc7-9f48-349473739b05" containerName="ceilometer-notification-agent"
Jan 21 16:08:40 crc kubenswrapper[4760]: E0121 16:08:40.568925 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d49331f-5dcf-4dc7-9f48-349473739b05" containerName="sg-core"
Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.568931 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d49331f-5dcf-4dc7-9f48-349473739b05" containerName="sg-core"
Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.569126 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d49331f-5dcf-4dc7-9f48-349473739b05" containerName="sg-core"
Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.569150 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d49331f-5dcf-4dc7-9f48-349473739b05" containerName="proxy-httpd"
Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.569158 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d49331f-5dcf-4dc7-9f48-349473739b05" containerName="ceilometer-notification-agent"
Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.569171 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d49331f-5dcf-4dc7-9f48-349473739b05" containerName="ceilometer-central-agent"
Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.571177 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.573288 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.574365 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.576224 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.597010 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.675173 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37dad9ac-6a5d-42a3-8d27-950f125ba73e-scripts\") pod \"ceilometer-0\" (UID: \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\") " pod="openstack/ceilometer-0"
Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.675882 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/37dad9ac-6a5d-42a3-8d27-950f125ba73e-log-httpd\") pod \"ceilometer-0\" (UID: \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\") " pod="openstack/ceilometer-0"
Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.676018 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37dad9ac-6a5d-42a3-8d27-950f125ba73e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\") " pod="openstack/ceilometer-0"
Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.676080 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/37dad9ac-6a5d-42a3-8d27-950f125ba73e-run-httpd\") pod \"ceilometer-0\" (UID: \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\") " pod="openstack/ceilometer-0"
Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.676355 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/37dad9ac-6a5d-42a3-8d27-950f125ba73e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\") " pod="openstack/ceilometer-0"
Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.676476 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37dad9ac-6a5d-42a3-8d27-950f125ba73e-config-data\") pod \"ceilometer-0\" (UID: \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\") " pod="openstack/ceilometer-0"
Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.676511 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/37dad9ac-6a5d-42a3-8d27-950f125ba73e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\") " pod="openstack/ceilometer-0"
Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.676634 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k9hh\" (UniqueName: \"kubernetes.io/projected/37dad9ac-6a5d-42a3-8d27-950f125ba73e-kube-api-access-4k9hh\") pod \"ceilometer-0\" (UID: \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\") " pod="openstack/ceilometer-0"
Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.778491 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/37dad9ac-6a5d-42a3-8d27-950f125ba73e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\") " pod="openstack/ceilometer-0"
Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.778558 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37dad9ac-6a5d-42a3-8d27-950f125ba73e-config-data\") pod \"ceilometer-0\" (UID: \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\") " pod="openstack/ceilometer-0"
Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.778585 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/37dad9ac-6a5d-42a3-8d27-950f125ba73e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\") " pod="openstack/ceilometer-0"
Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.778631 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4k9hh\" (UniqueName: \"kubernetes.io/projected/37dad9ac-6a5d-42a3-8d27-950f125ba73e-kube-api-access-4k9hh\") pod \"ceilometer-0\" (UID: \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\") " pod="openstack/ceilometer-0"
Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.778671 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37dad9ac-6a5d-42a3-8d27-950f125ba73e-scripts\") pod \"ceilometer-0\" (UID: \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\") " pod="openstack/ceilometer-0"
Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.778699 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/37dad9ac-6a5d-42a3-8d27-950f125ba73e-log-httpd\") pod \"ceilometer-0\" (UID: \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\") " pod="openstack/ceilometer-0"
Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.778728 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37dad9ac-6a5d-42a3-8d27-950f125ba73e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\") " pod="openstack/ceilometer-0"
Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.778750 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/37dad9ac-6a5d-42a3-8d27-950f125ba73e-run-httpd\") pod \"ceilometer-0\" (UID: \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\") " pod="openstack/ceilometer-0"
Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.779443 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/37dad9ac-6a5d-42a3-8d27-950f125ba73e-run-httpd\") pod \"ceilometer-0\" (UID: \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\") " pod="openstack/ceilometer-0"
Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.779600 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/37dad9ac-6a5d-42a3-8d27-950f125ba73e-log-httpd\") pod \"ceilometer-0\" (UID: \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\") " pod="openstack/ceilometer-0"
Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.785884 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37dad9ac-6a5d-42a3-8d27-950f125ba73e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\") " pod="openstack/ceilometer-0"
Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.785894 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37dad9ac-6a5d-42a3-8d27-950f125ba73e-config-data\") pod \"ceilometer-0\" (UID: \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\") " pod="openstack/ceilometer-0"
Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.786348 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/37dad9ac-6a5d-42a3-8d27-950f125ba73e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\") " pod="openstack/ceilometer-0"
Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.790092 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37dad9ac-6a5d-42a3-8d27-950f125ba73e-scripts\") pod \"ceilometer-0\" (UID: \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\") " pod="openstack/ceilometer-0"
Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.795539 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/37dad9ac-6a5d-42a3-8d27-950f125ba73e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\") " pod="openstack/ceilometer-0"
Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.798756 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4k9hh\" (UniqueName: \"kubernetes.io/projected/37dad9ac-6a5d-42a3-8d27-950f125ba73e-kube-api-access-4k9hh\") pod \"ceilometer-0\" (UID: \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\") " pod="openstack/ceilometer-0"
Jan 21 16:08:40 crc kubenswrapper[4760]: I0121 16:08:40.893631 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 21 16:08:41 crc kubenswrapper[4760]: I0121 16:08:41.029128 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-789c75ff48-s7f9p"
Jan 21 16:08:41 crc kubenswrapper[4760]: I0121 16:08:41.099666 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ce8d17c-d046-45b5-9136-6faca838de63-combined-ca-bundle\") pod \"9ce8d17c-d046-45b5-9136-6faca838de63\" (UID: \"9ce8d17c-d046-45b5-9136-6faca838de63\") "
Jan 21 16:08:41 crc kubenswrapper[4760]: I0121 16:08:41.099709 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ce8d17c-d046-45b5-9136-6faca838de63-scripts\") pod \"9ce8d17c-d046-45b5-9136-6faca838de63\" (UID: \"9ce8d17c-d046-45b5-9136-6faca838de63\") "
Jan 21 16:08:41 crc kubenswrapper[4760]: I0121 16:08:41.099771 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9ce8d17c-d046-45b5-9136-6faca838de63-config-data\") pod \"9ce8d17c-d046-45b5-9136-6faca838de63\" (UID: \"9ce8d17c-d046-45b5-9136-6faca838de63\") "
Jan 21 16:08:41 crc kubenswrapper[4760]: I0121 16:08:41.099886 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9ce8d17c-d046-45b5-9136-6faca838de63-horizon-secret-key\") pod \"9ce8d17c-d046-45b5-9136-6faca838de63\" (UID: \"9ce8d17c-d046-45b5-9136-6faca838de63\") "
Jan 21 16:08:41 crc kubenswrapper[4760]: I0121 16:08:41.099970 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ce8d17c-d046-45b5-9136-6faca838de63-horizon-tls-certs\") pod \"9ce8d17c-d046-45b5-9136-6faca838de63\" (UID: \"9ce8d17c-d046-45b5-9136-6faca838de63\") "
Jan 21 16:08:41 crc kubenswrapper[4760]: I0121 16:08:41.100014 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2qdt\" (UniqueName: \"kubernetes.io/projected/9ce8d17c-d046-45b5-9136-6faca838de63-kube-api-access-k2qdt\") pod \"9ce8d17c-d046-45b5-9136-6faca838de63\" (UID: \"9ce8d17c-d046-45b5-9136-6faca838de63\") "
Jan 21 16:08:41 crc kubenswrapper[4760]: I0121 16:08:41.100037 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ce8d17c-d046-45b5-9136-6faca838de63-logs\") pod \"9ce8d17c-d046-45b5-9136-6faca838de63\" (UID: \"9ce8d17c-d046-45b5-9136-6faca838de63\") "
Jan 21 16:08:41 crc kubenswrapper[4760]: I0121 16:08:41.101254 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ce8d17c-d046-45b5-9136-6faca838de63-logs" (OuterVolumeSpecName: "logs") pod "9ce8d17c-d046-45b5-9136-6faca838de63" (UID: "9ce8d17c-d046-45b5-9136-6faca838de63"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:08:41 crc kubenswrapper[4760]: I0121 16:08:41.120584 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ce8d17c-d046-45b5-9136-6faca838de63-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "9ce8d17c-d046-45b5-9136-6faca838de63" (UID: "9ce8d17c-d046-45b5-9136-6faca838de63"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:08:41 crc kubenswrapper[4760]: I0121 16:08:41.132235 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ce8d17c-d046-45b5-9136-6faca838de63-kube-api-access-k2qdt" (OuterVolumeSpecName: "kube-api-access-k2qdt") pod "9ce8d17c-d046-45b5-9136-6faca838de63" (UID: "9ce8d17c-d046-45b5-9136-6faca838de63"). InnerVolumeSpecName "kube-api-access-k2qdt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:08:41 crc kubenswrapper[4760]: I0121 16:08:41.139474 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ce8d17c-d046-45b5-9136-6faca838de63-scripts" (OuterVolumeSpecName: "scripts") pod "9ce8d17c-d046-45b5-9136-6faca838de63" (UID: "9ce8d17c-d046-45b5-9136-6faca838de63"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 16:08:41 crc kubenswrapper[4760]: I0121 16:08:41.164943 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ce8d17c-d046-45b5-9136-6faca838de63-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9ce8d17c-d046-45b5-9136-6faca838de63" (UID: "9ce8d17c-d046-45b5-9136-6faca838de63"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:08:41 crc kubenswrapper[4760]: I0121 16:08:41.179525 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 16:08:41 crc kubenswrapper[4760]: I0121 16:08:41.190108 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-789c75ff48-s7f9p" event={"ID":"9ce8d17c-d046-45b5-9136-6faca838de63","Type":"ContainerDied","Data":"99845b706fb042dde05e1b648b3f5ce119d2a8e33b829322f4ebb2d5ed2d32b2"}
Jan 21 16:08:41 crc kubenswrapper[4760]: I0121 16:08:41.190200 4760 scope.go:117] "RemoveContainer" containerID="b0561fc99b64223a07d0ada5779e7047e4dd9e196ab449c4b0befd20ca184b74"
Jan 21 16:08:41 crc kubenswrapper[4760]: I0121 16:08:41.190209 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-789c75ff48-s7f9p"
Jan 21 16:08:41 crc kubenswrapper[4760]: I0121 16:08:41.196958 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ce8d17c-d046-45b5-9136-6faca838de63-config-data" (OuterVolumeSpecName: "config-data") pod "9ce8d17c-d046-45b5-9136-6faca838de63" (UID: "9ce8d17c-d046-45b5-9136-6faca838de63"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 16:08:41 crc kubenswrapper[4760]: I0121 16:08:41.199220 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ce8d17c-d046-45b5-9136-6faca838de63-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "9ce8d17c-d046-45b5-9136-6faca838de63" (UID: "9ce8d17c-d046-45b5-9136-6faca838de63"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:08:41 crc kubenswrapper[4760]: I0121 16:08:41.202988 4760 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ce8d17c-d046-45b5-9136-6faca838de63-logs\") on node \"crc\" DevicePath \"\""
Jan 21 16:08:41 crc kubenswrapper[4760]: I0121 16:08:41.203015 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ce8d17c-d046-45b5-9136-6faca838de63-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 16:08:41 crc kubenswrapper[4760]: I0121 16:08:41.203030 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ce8d17c-d046-45b5-9136-6faca838de63-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 16:08:41 crc kubenswrapper[4760]: I0121 16:08:41.203045 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9ce8d17c-d046-45b5-9136-6faca838de63-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 16:08:41 crc kubenswrapper[4760]: I0121 16:08:41.203057 4760 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9ce8d17c-d046-45b5-9136-6faca838de63-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Jan 21 16:08:41 crc kubenswrapper[4760]: I0121 16:08:41.203070 4760 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ce8d17c-d046-45b5-9136-6faca838de63-horizon-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 21 16:08:41 crc kubenswrapper[4760]: I0121 16:08:41.203083 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2qdt\" (UniqueName: \"kubernetes.io/projected/9ce8d17c-d046-45b5-9136-6faca838de63-kube-api-access-k2qdt\") on node \"crc\" DevicePath \"\""
Jan 21 16:08:41 crc kubenswrapper[4760]: I0121 16:08:41.535447 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-789c75ff48-s7f9p"]
Jan 21 16:08:41 crc kubenswrapper[4760]: I0121 16:08:41.543509 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-789c75ff48-s7f9p"]
Jan 21 16:08:41 crc kubenswrapper[4760]: I0121 16:08:41.552138 4760 scope.go:117] "RemoveContainer" containerID="dbc7df94dfd0bf190529b48e0582f7e96d1e1d6a91f71b8e6cbcbced81b5b549"
Jan 21 16:08:41 crc kubenswrapper[4760]: I0121 16:08:41.647019 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d49331f-5dcf-4dc7-9f48-349473739b05" path="/var/lib/kubelet/pods/7d49331f-5dcf-4dc7-9f48-349473739b05/volumes"
Jan 21 16:08:41 crc kubenswrapper[4760]: I0121 16:08:41.656096 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ce8d17c-d046-45b5-9136-6faca838de63" path="/var/lib/kubelet/pods/9ce8d17c-d046-45b5-9136-6faca838de63/volumes"
Jan 21 16:08:41 crc kubenswrapper[4760]: E0121 16:08:41.702986 4760 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ce8d17c_d046_45b5_9136_6faca838de63.slice/crio-99845b706fb042dde05e1b648b3f5ce119d2a8e33b829322f4ebb2d5ed2d32b2\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ce8d17c_d046_45b5_9136_6faca838de63.slice\": RecentStats: unable to find data in memory cache]"
Jan 21 16:08:42 crc kubenswrapper[4760]: I0121 16:08:42.202012 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"37dad9ac-6a5d-42a3-8d27-950f125ba73e","Type":"ContainerStarted","Data":"4a4ec7c0621ab04bfc1330f49096ad6beaefcc2da4c716e22cd73a17987b5f36"}
Jan 21 16:08:42 crc kubenswrapper[4760]: I0121 16:08:42.202571 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"37dad9ac-6a5d-42a3-8d27-950f125ba73e","Type":"ContainerStarted","Data":"8be89513ab5b47099cb5f4a9b8dfe36c7195061b61e6e75962374d77612636bf"}
Jan 21 16:08:44 crc kubenswrapper[4760]: I0121 16:08:44.222789 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"37dad9ac-6a5d-42a3-8d27-950f125ba73e","Type":"ContainerStarted","Data":"dad8da53bde86d9756a9332e991bdb9e5dee4591dabb79112383d1fff4d27d37"}
Jan 21 16:08:45 crc kubenswrapper[4760]: I0121 16:08:45.232982 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"37dad9ac-6a5d-42a3-8d27-950f125ba73e","Type":"ContainerStarted","Data":"be55ee2de0ebd58e732e8041653a6c569076e0b5d1115f76b9a39c89e011e777"}
Jan 21 16:08:47 crc kubenswrapper[4760]: I0121 16:08:47.257306 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"37dad9ac-6a5d-42a3-8d27-950f125ba73e","Type":"ContainerStarted","Data":"64d1b04ee3e6e2ab89c4b8cd93cd15c495a27540aa681776839822fea4e64771"}
Jan 21 16:08:47 crc kubenswrapper[4760]: I0121 16:08:47.257976 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 21 16:08:47 crc kubenswrapper[4760]: I0121 16:08:47.286172 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.911650206 podStartE2EDuration="7.286142796s" podCreationTimestamp="2026-01-21 16:08:40 +0000 UTC" firstStartedPulling="2026-01-21 16:08:41.204876341 +0000 UTC m=+1291.872645919" lastFinishedPulling="2026-01-21 16:08:46.579368931 +0000 UTC m=+1297.247138509" observedRunningTime="2026-01-21 16:08:47.280171819 +0000 UTC m=+1297.947941397" watchObservedRunningTime="2026-01-21 16:08:47.286142796 +0000 UTC m=+1297.953912374"
Jan 21 16:08:48 crc kubenswrapper[4760]: I0121 16:08:48.269541 4760 generic.go:334] "Generic (PLEG): container finished" podID="98bcfa69-f25f-4f8a-8018-664dbdf6e1d3" containerID="de3b43ba5de05ae071625dc753b0e6fa90712bb4fb5fcaf851c2c4dd803c1010" exitCode=0
Jan 21 16:08:48 crc kubenswrapper[4760]: I0121 16:08:48.270841 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-kwcw6" event={"ID":"98bcfa69-f25f-4f8a-8018-664dbdf6e1d3","Type":"ContainerDied","Data":"de3b43ba5de05ae071625dc753b0e6fa90712bb4fb5fcaf851c2c4dd803c1010"}
Jan 21 16:08:49 crc kubenswrapper[4760]: I0121 16:08:49.825002 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-kwcw6"
Jan 21 16:08:49 crc kubenswrapper[4760]: I0121 16:08:49.891762 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98bcfa69-f25f-4f8a-8018-664dbdf6e1d3-config-data\") pod \"98bcfa69-f25f-4f8a-8018-664dbdf6e1d3\" (UID: \"98bcfa69-f25f-4f8a-8018-664dbdf6e1d3\") "
Jan 21 16:08:49 crc kubenswrapper[4760]: I0121 16:08:49.891849 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98bcfa69-f25f-4f8a-8018-664dbdf6e1d3-scripts\") pod \"98bcfa69-f25f-4f8a-8018-664dbdf6e1d3\" (UID: \"98bcfa69-f25f-4f8a-8018-664dbdf6e1d3\") "
Jan 21 16:08:49 crc kubenswrapper[4760]: I0121 16:08:49.891941 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvgsh\" (UniqueName: \"kubernetes.io/projected/98bcfa69-f25f-4f8a-8018-664dbdf6e1d3-kube-api-access-rvgsh\") pod \"98bcfa69-f25f-4f8a-8018-664dbdf6e1d3\" (UID: \"98bcfa69-f25f-4f8a-8018-664dbdf6e1d3\") "
Jan 21 16:08:49 crc kubenswrapper[4760]: I0121 16:08:49.891977 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98bcfa69-f25f-4f8a-8018-664dbdf6e1d3-combined-ca-bundle\") pod \"98bcfa69-f25f-4f8a-8018-664dbdf6e1d3\" (UID: \"98bcfa69-f25f-4f8a-8018-664dbdf6e1d3\") "
Jan 21 16:08:49 crc kubenswrapper[4760]: I0121 16:08:49.897630 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98bcfa69-f25f-4f8a-8018-664dbdf6e1d3-scripts" (OuterVolumeSpecName: "scripts") pod "98bcfa69-f25f-4f8a-8018-664dbdf6e1d3" (UID: "98bcfa69-f25f-4f8a-8018-664dbdf6e1d3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:08:49 crc kubenswrapper[4760]: I0121 16:08:49.897711 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98bcfa69-f25f-4f8a-8018-664dbdf6e1d3-kube-api-access-rvgsh" (OuterVolumeSpecName: "kube-api-access-rvgsh") pod "98bcfa69-f25f-4f8a-8018-664dbdf6e1d3" (UID: "98bcfa69-f25f-4f8a-8018-664dbdf6e1d3"). InnerVolumeSpecName "kube-api-access-rvgsh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:08:49 crc kubenswrapper[4760]: I0121 16:08:49.920734 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98bcfa69-f25f-4f8a-8018-664dbdf6e1d3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "98bcfa69-f25f-4f8a-8018-664dbdf6e1d3" (UID: "98bcfa69-f25f-4f8a-8018-664dbdf6e1d3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:08:49 crc kubenswrapper[4760]: I0121 16:08:49.921370 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98bcfa69-f25f-4f8a-8018-664dbdf6e1d3-config-data" (OuterVolumeSpecName: "config-data") pod "98bcfa69-f25f-4f8a-8018-664dbdf6e1d3" (UID: "98bcfa69-f25f-4f8a-8018-664dbdf6e1d3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:08:49 crc kubenswrapper[4760]: I0121 16:08:49.993796 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98bcfa69-f25f-4f8a-8018-664dbdf6e1d3-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 16:08:49 crc kubenswrapper[4760]: I0121 16:08:49.993847 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98bcfa69-f25f-4f8a-8018-664dbdf6e1d3-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 16:08:49 crc kubenswrapper[4760]: I0121 16:08:49.993859 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvgsh\" (UniqueName: \"kubernetes.io/projected/98bcfa69-f25f-4f8a-8018-664dbdf6e1d3-kube-api-access-rvgsh\") on node \"crc\" DevicePath \"\""
Jan 21 16:08:49 crc kubenswrapper[4760]: I0121 16:08:49.993873 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98bcfa69-f25f-4f8a-8018-664dbdf6e1d3-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 16:08:50 crc kubenswrapper[4760]: I0121 16:08:50.288464 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-kwcw6" event={"ID":"98bcfa69-f25f-4f8a-8018-664dbdf6e1d3","Type":"ContainerDied","Data":"e2940918a72e34441a85a4d61057ea04055ab2b4a9c608282b33cbd8908a4134"}
Jan 21 16:08:50 crc kubenswrapper[4760]: I0121 16:08:50.288521 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2940918a72e34441a85a4d61057ea04055ab2b4a9c608282b33cbd8908a4134"
Jan 21 16:08:50 crc kubenswrapper[4760]: I0121 16:08:50.288524 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-kwcw6"
Jan 21 16:08:50 crc kubenswrapper[4760]: I0121 16:08:50.417158 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 21 16:08:50 crc kubenswrapper[4760]: E0121 16:08:50.417576 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ce8d17c-d046-45b5-9136-6faca838de63" containerName="horizon-log"
Jan 21 16:08:50 crc kubenswrapper[4760]: I0121 16:08:50.417598 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ce8d17c-d046-45b5-9136-6faca838de63" containerName="horizon-log"
Jan 21 16:08:50 crc kubenswrapper[4760]: E0121 16:08:50.417635 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98bcfa69-f25f-4f8a-8018-664dbdf6e1d3" containerName="nova-cell0-conductor-db-sync"
Jan 21 16:08:50 crc kubenswrapper[4760]: I0121 16:08:50.417645 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="98bcfa69-f25f-4f8a-8018-664dbdf6e1d3" containerName="nova-cell0-conductor-db-sync"
Jan 21 16:08:50 crc kubenswrapper[4760]: E0121 16:08:50.417659 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ce8d17c-d046-45b5-9136-6faca838de63" containerName="horizon"
Jan 21 16:08:50 crc kubenswrapper[4760]: I0121 16:08:50.417666 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ce8d17c-d046-45b5-9136-6faca838de63" containerName="horizon"
Jan 21 16:08:50 crc kubenswrapper[4760]: E0121 16:08:50.417685 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ce8d17c-d046-45b5-9136-6faca838de63" containerName="horizon"
Jan 21 16:08:50 crc kubenswrapper[4760]: I0121 16:08:50.417695 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ce8d17c-d046-45b5-9136-6faca838de63" containerName="horizon"
Jan 21 16:08:50 crc kubenswrapper[4760]: I0121 16:08:50.417854 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ce8d17c-d046-45b5-9136-6faca838de63" containerName="horizon"
Jan 21 16:08:50 crc kubenswrapper[4760]: I0121 16:08:50.417866 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="98bcfa69-f25f-4f8a-8018-664dbdf6e1d3" containerName="nova-cell0-conductor-db-sync"
Jan 21 16:08:50 crc kubenswrapper[4760]: I0121 16:08:50.417878 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ce8d17c-d046-45b5-9136-6faca838de63" containerName="horizon"
Jan 21 16:08:50 crc kubenswrapper[4760]: I0121 16:08:50.417895 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ce8d17c-d046-45b5-9136-6faca838de63" containerName="horizon-log"
Jan 21 16:08:50 crc kubenswrapper[4760]: I0121 16:08:50.418588 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 21 16:08:50 crc kubenswrapper[4760]: I0121 16:08:50.420950 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Jan 21 16:08:50 crc kubenswrapper[4760]: I0121 16:08:50.422347 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-zgb7k"
Jan 21 16:08:50 crc kubenswrapper[4760]: I0121 16:08:50.425553 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 21 16:08:50 crc kubenswrapper[4760]: I0121 16:08:50.501725 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56d015a2-9a67-4f44-a726-21949444f11b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"56d015a2-9a67-4f44-a726-21949444f11b\") " pod="openstack/nova-cell0-conductor-0"
Jan 21 16:08:50 crc kubenswrapper[4760]: I0121 16:08:50.502078 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56d015a2-9a67-4f44-a726-21949444f11b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"56d015a2-9a67-4f44-a726-21949444f11b\") " pod="openstack/nova-cell0-conductor-0"
Jan 21 16:08:50 crc kubenswrapper[4760]: I0121 16:08:50.502375 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4pzx\" (UniqueName: \"kubernetes.io/projected/56d015a2-9a67-4f44-a726-21949444f11b-kube-api-access-d4pzx\") pod \"nova-cell0-conductor-0\" (UID: \"56d015a2-9a67-4f44-a726-21949444f11b\") " pod="openstack/nova-cell0-conductor-0"
Jan 21 16:08:50 crc kubenswrapper[4760]: I0121 16:08:50.605272 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56d015a2-9a67-4f44-a726-21949444f11b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"56d015a2-9a67-4f44-a726-21949444f11b\") " pod="openstack/nova-cell0-conductor-0"
Jan 21 16:08:50 crc kubenswrapper[4760]: I0121 16:08:50.605448 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4pzx\" (UniqueName: \"kubernetes.io/projected/56d015a2-9a67-4f44-a726-21949444f11b-kube-api-access-d4pzx\") pod \"nova-cell0-conductor-0\" (UID: \"56d015a2-9a67-4f44-a726-21949444f11b\") " pod="openstack/nova-cell0-conductor-0"
Jan 21 16:08:50 crc kubenswrapper[4760]: I0121 16:08:50.605563 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56d015a2-9a67-4f44-a726-21949444f11b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"56d015a2-9a67-4f44-a726-21949444f11b\") " pod="openstack/nova-cell0-conductor-0"
Jan 21 16:08:50 crc kubenswrapper[4760]: I0121 16:08:50.610735 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56d015a2-9a67-4f44-a726-21949444f11b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"56d015a2-9a67-4f44-a726-21949444f11b\") " pod="openstack/nova-cell0-conductor-0" Jan
21 16:08:50 crc kubenswrapper[4760]: I0121 16:08:50.611609 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56d015a2-9a67-4f44-a726-21949444f11b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"56d015a2-9a67-4f44-a726-21949444f11b\") " pod="openstack/nova-cell0-conductor-0" Jan 21 16:08:50 crc kubenswrapper[4760]: I0121 16:08:50.628687 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4pzx\" (UniqueName: \"kubernetes.io/projected/56d015a2-9a67-4f44-a726-21949444f11b-kube-api-access-d4pzx\") pod \"nova-cell0-conductor-0\" (UID: \"56d015a2-9a67-4f44-a726-21949444f11b\") " pod="openstack/nova-cell0-conductor-0" Jan 21 16:08:50 crc kubenswrapper[4760]: I0121 16:08:50.735205 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 21 16:08:51 crc kubenswrapper[4760]: I0121 16:08:51.217802 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 21 16:08:51 crc kubenswrapper[4760]: I0121 16:08:51.298989 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"56d015a2-9a67-4f44-a726-21949444f11b","Type":"ContainerStarted","Data":"ba3873cc9108087c3f0b602de901a61d9204c48419e03b5b889b0256506c897d"} Jan 21 16:08:52 crc kubenswrapper[4760]: I0121 16:08:52.313794 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"56d015a2-9a67-4f44-a726-21949444f11b","Type":"ContainerStarted","Data":"9daae352463cb8673622d9e26b58f0ed71583038a64957298c2c5816060f6337"} Jan 21 16:08:52 crc kubenswrapper[4760]: I0121 16:08:52.314676 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 21 16:08:52 crc kubenswrapper[4760]: I0121 16:08:52.340289 4760 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.340258251 podStartE2EDuration="2.340258251s" podCreationTimestamp="2026-01-21 16:08:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:08:52.33171613 +0000 UTC m=+1302.999485718" watchObservedRunningTime="2026-01-21 16:08:52.340258251 +0000 UTC m=+1303.008027819" Jan 21 16:09:00 crc kubenswrapper[4760]: I0121 16:09:00.768471 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.293216 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-nfqg4"] Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.294449 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-nfqg4" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.297417 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.297634 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.306673 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-nfqg4"] Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.480706 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5f84\" (UniqueName: \"kubernetes.io/projected/d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337-kube-api-access-t5f84\") pod \"nova-cell0-cell-mapping-nfqg4\" (UID: \"d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337\") " pod="openstack/nova-cell0-cell-mapping-nfqg4" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.480847 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-nfqg4\" (UID: \"d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337\") " pod="openstack/nova-cell0-cell-mapping-nfqg4" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.480917 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337-config-data\") pod \"nova-cell0-cell-mapping-nfqg4\" (UID: \"d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337\") " pod="openstack/nova-cell0-cell-mapping-nfqg4" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.481168 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337-scripts\") pod \"nova-cell0-cell-mapping-nfqg4\" (UID: \"d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337\") " pod="openstack/nova-cell0-cell-mapping-nfqg4" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.516073 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.518589 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 16:09:01 crc kubenswrapper[4760]: W0121 16:09:01.524244 4760 reflector.go:561] object-"openstack"/"nova-api-config-data": failed to list *v1.Secret: secrets "nova-api-config-data" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Jan 21 16:09:01 crc kubenswrapper[4760]: E0121 16:09:01.524302 4760 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"nova-api-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"nova-api-config-data\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.526372 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.528912 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.539030 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.585743 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337-config-data\") pod \"nova-cell0-cell-mapping-nfqg4\" (UID: \"d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337\") " pod="openstack/nova-cell0-cell-mapping-nfqg4" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.585828 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3cb009b-917a-4689-85cc-6d1a4669ebb5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e3cb009b-917a-4689-85cc-6d1a4669ebb5\") " pod="openstack/nova-api-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.585876 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337-scripts\") pod \"nova-cell0-cell-mapping-nfqg4\" (UID: \"d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337\") " pod="openstack/nova-cell0-cell-mapping-nfqg4" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.585908 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3cb009b-917a-4689-85cc-6d1a4669ebb5-config-data\") pod \"nova-api-0\" (UID: \"e3cb009b-917a-4689-85cc-6d1a4669ebb5\") " pod="openstack/nova-api-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.585961 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wr48p\" (UniqueName: 
\"kubernetes.io/projected/e3cb009b-917a-4689-85cc-6d1a4669ebb5-kube-api-access-wr48p\") pod \"nova-api-0\" (UID: \"e3cb009b-917a-4689-85cc-6d1a4669ebb5\") " pod="openstack/nova-api-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.586010 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5f84\" (UniqueName: \"kubernetes.io/projected/d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337-kube-api-access-t5f84\") pod \"nova-cell0-cell-mapping-nfqg4\" (UID: \"d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337\") " pod="openstack/nova-cell0-cell-mapping-nfqg4" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.586043 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v6d4\" (UniqueName: \"kubernetes.io/projected/0221f422-acd4-4933-a761-d206c007f5db-kube-api-access-7v6d4\") pod \"nova-metadata-0\" (UID: \"0221f422-acd4-4933-a761-d206c007f5db\") " pod="openstack/nova-metadata-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.586077 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0221f422-acd4-4933-a761-d206c007f5db-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0221f422-acd4-4933-a761-d206c007f5db\") " pod="openstack/nova-metadata-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.586114 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3cb009b-917a-4689-85cc-6d1a4669ebb5-logs\") pod \"nova-api-0\" (UID: \"e3cb009b-917a-4689-85cc-6d1a4669ebb5\") " pod="openstack/nova-api-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.586138 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0221f422-acd4-4933-a761-d206c007f5db-logs\") pod 
\"nova-metadata-0\" (UID: \"0221f422-acd4-4933-a761-d206c007f5db\") " pod="openstack/nova-metadata-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.586193 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0221f422-acd4-4933-a761-d206c007f5db-config-data\") pod \"nova-metadata-0\" (UID: \"0221f422-acd4-4933-a761-d206c007f5db\") " pod="openstack/nova-metadata-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.586238 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-nfqg4\" (UID: \"d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337\") " pod="openstack/nova-cell0-cell-mapping-nfqg4" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.601425 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.625002 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337-scripts\") pod \"nova-cell0-cell-mapping-nfqg4\" (UID: \"d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337\") " pod="openstack/nova-cell0-cell-mapping-nfqg4" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.626288 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337-config-data\") pod \"nova-cell0-cell-mapping-nfqg4\" (UID: \"d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337\") " pod="openstack/nova-cell0-cell-mapping-nfqg4" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.629317 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-nfqg4\" (UID: \"d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337\") " pod="openstack/nova-cell0-cell-mapping-nfqg4" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.652297 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5f84\" (UniqueName: \"kubernetes.io/projected/d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337-kube-api-access-t5f84\") pod \"nova-cell0-cell-mapping-nfqg4\" (UID: \"d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337\") " pod="openstack/nova-cell0-cell-mapping-nfqg4" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.678792 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.688304 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3cb009b-917a-4689-85cc-6d1a4669ebb5-config-data\") pod \"nova-api-0\" (UID: \"e3cb009b-917a-4689-85cc-6d1a4669ebb5\") " pod="openstack/nova-api-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.709588 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wr48p\" (UniqueName: \"kubernetes.io/projected/e3cb009b-917a-4689-85cc-6d1a4669ebb5-kube-api-access-wr48p\") pod \"nova-api-0\" (UID: \"e3cb009b-917a-4689-85cc-6d1a4669ebb5\") " pod="openstack/nova-api-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.709767 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7v6d4\" (UniqueName: \"kubernetes.io/projected/0221f422-acd4-4933-a761-d206c007f5db-kube-api-access-7v6d4\") pod \"nova-metadata-0\" (UID: \"0221f422-acd4-4933-a761-d206c007f5db\") " pod="openstack/nova-metadata-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.709825 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0221f422-acd4-4933-a761-d206c007f5db-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0221f422-acd4-4933-a761-d206c007f5db\") " pod="openstack/nova-metadata-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.709896 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3cb009b-917a-4689-85cc-6d1a4669ebb5-logs\") pod \"nova-api-0\" (UID: \"e3cb009b-917a-4689-85cc-6d1a4669ebb5\") " pod="openstack/nova-api-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.709924 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0221f422-acd4-4933-a761-d206c007f5db-logs\") pod \"nova-metadata-0\" (UID: \"0221f422-acd4-4933-a761-d206c007f5db\") " pod="openstack/nova-metadata-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.709953 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0221f422-acd4-4933-a761-d206c007f5db-config-data\") pod \"nova-metadata-0\" (UID: \"0221f422-acd4-4933-a761-d206c007f5db\") " pod="openstack/nova-metadata-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.710154 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3cb009b-917a-4689-85cc-6d1a4669ebb5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e3cb009b-917a-4689-85cc-6d1a4669ebb5\") " pod="openstack/nova-api-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.711885 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0221f422-acd4-4933-a761-d206c007f5db-logs\") pod \"nova-metadata-0\" (UID: \"0221f422-acd4-4933-a761-d206c007f5db\") " pod="openstack/nova-metadata-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 
16:09:01.712257 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3cb009b-917a-4689-85cc-6d1a4669ebb5-logs\") pod \"nova-api-0\" (UID: \"e3cb009b-917a-4689-85cc-6d1a4669ebb5\") " pod="openstack/nova-api-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.703936 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.714246 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.724712 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3cb009b-917a-4689-85cc-6d1a4669ebb5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e3cb009b-917a-4689-85cc-6d1a4669ebb5\") " pod="openstack/nova-api-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.725638 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.742974 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wr48p\" (UniqueName: \"kubernetes.io/projected/e3cb009b-917a-4689-85cc-6d1a4669ebb5-kube-api-access-wr48p\") pod \"nova-api-0\" (UID: \"e3cb009b-917a-4689-85cc-6d1a4669ebb5\") " pod="openstack/nova-api-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.747467 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0221f422-acd4-4933-a761-d206c007f5db-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0221f422-acd4-4933-a761-d206c007f5db\") " pod="openstack/nova-metadata-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.760688 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.773226 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0221f422-acd4-4933-a761-d206c007f5db-config-data\") pod \"nova-metadata-0\" (UID: \"0221f422-acd4-4933-a761-d206c007f5db\") " pod="openstack/nova-metadata-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.788317 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-pvm44"] Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.793268 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-pvm44" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.801059 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7v6d4\" (UniqueName: \"kubernetes.io/projected/0221f422-acd4-4933-a761-d206c007f5db-kube-api-access-7v6d4\") pod \"nova-metadata-0\" (UID: \"0221f422-acd4-4933-a761-d206c007f5db\") " pod="openstack/nova-metadata-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.809124 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.810746 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.812818 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5gww\" (UniqueName: \"kubernetes.io/projected/c896aef3-816a-45b4-80fc-f21db51900ad-kube-api-access-d5gww\") pod \"nova-cell1-novncproxy-0\" (UID: \"c896aef3-816a-45b4-80fc-f21db51900ad\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.812961 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c896aef3-816a-45b4-80fc-f21db51900ad-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c896aef3-816a-45b4-80fc-f21db51900ad\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.813028 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c896aef3-816a-45b4-80fc-f21db51900ad-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c896aef3-816a-45b4-80fc-f21db51900ad\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.821282 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.851448 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-pvm44"] Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.861115 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.873993 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.914231 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-nfqg4" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.915792 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c9d0bcf-8f09-4913-855f-f4409d61e726-config-data\") pod \"nova-scheduler-0\" (UID: \"9c9d0bcf-8f09-4913-855f-f4409d61e726\") " pod="openstack/nova-scheduler-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.915872 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbbqc\" (UniqueName: \"kubernetes.io/projected/9c9d0bcf-8f09-4913-855f-f4409d61e726-kube-api-access-cbbqc\") pod \"nova-scheduler-0\" (UID: \"9c9d0bcf-8f09-4913-855f-f4409d61e726\") " pod="openstack/nova-scheduler-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.915930 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c9d0bcf-8f09-4913-855f-f4409d61e726-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9c9d0bcf-8f09-4913-855f-f4409d61e726\") " pod="openstack/nova-scheduler-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.915961 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2plw\" (UniqueName: \"kubernetes.io/projected/1d894810-0b12-4078-9edb-9b78d95cd5f4-kube-api-access-z2plw\") pod \"dnsmasq-dns-bccf8f775-pvm44\" (UID: \"1d894810-0b12-4078-9edb-9b78d95cd5f4\") " pod="openstack/dnsmasq-dns-bccf8f775-pvm44" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.915990 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d894810-0b12-4078-9edb-9b78d95cd5f4-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-pvm44\" (UID: \"1d894810-0b12-4078-9edb-9b78d95cd5f4\") " pod="openstack/dnsmasq-dns-bccf8f775-pvm44" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.916046 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d894810-0b12-4078-9edb-9b78d95cd5f4-config\") pod \"dnsmasq-dns-bccf8f775-pvm44\" (UID: \"1d894810-0b12-4078-9edb-9b78d95cd5f4\") " pod="openstack/dnsmasq-dns-bccf8f775-pvm44" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.916084 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c896aef3-816a-45b4-80fc-f21db51900ad-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c896aef3-816a-45b4-80fc-f21db51900ad\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.916152 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c896aef3-816a-45b4-80fc-f21db51900ad-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c896aef3-816a-45b4-80fc-f21db51900ad\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.916179 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d894810-0b12-4078-9edb-9b78d95cd5f4-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-pvm44\" (UID: \"1d894810-0b12-4078-9edb-9b78d95cd5f4\") " pod="openstack/dnsmasq-dns-bccf8f775-pvm44" Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.916201 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/1d894810-0b12-4078-9edb-9b78d95cd5f4-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-pvm44\" (UID: \"1d894810-0b12-4078-9edb-9b78d95cd5f4\") " pod="openstack/dnsmasq-dns-bccf8f775-pvm44"
Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.916231 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d894810-0b12-4078-9edb-9b78d95cd5f4-dns-svc\") pod \"dnsmasq-dns-bccf8f775-pvm44\" (UID: \"1d894810-0b12-4078-9edb-9b78d95cd5f4\") " pod="openstack/dnsmasq-dns-bccf8f775-pvm44"
Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.916301 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5gww\" (UniqueName: \"kubernetes.io/projected/c896aef3-816a-45b4-80fc-f21db51900ad-kube-api-access-d5gww\") pod \"nova-cell1-novncproxy-0\" (UID: \"c896aef3-816a-45b4-80fc-f21db51900ad\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.929013 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c896aef3-816a-45b4-80fc-f21db51900ad-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c896aef3-816a-45b4-80fc-f21db51900ad\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.947756 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c896aef3-816a-45b4-80fc-f21db51900ad-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c896aef3-816a-45b4-80fc-f21db51900ad\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 21 16:09:01 crc kubenswrapper[4760]: I0121 16:09:01.957649 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5gww\" (UniqueName: \"kubernetes.io/projected/c896aef3-816a-45b4-80fc-f21db51900ad-kube-api-access-d5gww\") pod \"nova-cell1-novncproxy-0\" (UID: \"c896aef3-816a-45b4-80fc-f21db51900ad\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.024426 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c9d0bcf-8f09-4913-855f-f4409d61e726-config-data\") pod \"nova-scheduler-0\" (UID: \"9c9d0bcf-8f09-4913-855f-f4409d61e726\") " pod="openstack/nova-scheduler-0"
Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.024637 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbbqc\" (UniqueName: \"kubernetes.io/projected/9c9d0bcf-8f09-4913-855f-f4409d61e726-kube-api-access-cbbqc\") pod \"nova-scheduler-0\" (UID: \"9c9d0bcf-8f09-4913-855f-f4409d61e726\") " pod="openstack/nova-scheduler-0"
Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.024695 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c9d0bcf-8f09-4913-855f-f4409d61e726-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9c9d0bcf-8f09-4913-855f-f4409d61e726\") " pod="openstack/nova-scheduler-0"
Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.024733 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2plw\" (UniqueName: \"kubernetes.io/projected/1d894810-0b12-4078-9edb-9b78d95cd5f4-kube-api-access-z2plw\") pod \"dnsmasq-dns-bccf8f775-pvm44\" (UID: \"1d894810-0b12-4078-9edb-9b78d95cd5f4\") " pod="openstack/dnsmasq-dns-bccf8f775-pvm44"
Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.024763 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d894810-0b12-4078-9edb-9b78d95cd5f4-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-pvm44\" (UID: \"1d894810-0b12-4078-9edb-9b78d95cd5f4\") " pod="openstack/dnsmasq-dns-bccf8f775-pvm44"
Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.024814 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d894810-0b12-4078-9edb-9b78d95cd5f4-config\") pod \"dnsmasq-dns-bccf8f775-pvm44\" (UID: \"1d894810-0b12-4078-9edb-9b78d95cd5f4\") " pod="openstack/dnsmasq-dns-bccf8f775-pvm44"
Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.024924 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d894810-0b12-4078-9edb-9b78d95cd5f4-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-pvm44\" (UID: \"1d894810-0b12-4078-9edb-9b78d95cd5f4\") " pod="openstack/dnsmasq-dns-bccf8f775-pvm44"
Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.024971 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d894810-0b12-4078-9edb-9b78d95cd5f4-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-pvm44\" (UID: \"1d894810-0b12-4078-9edb-9b78d95cd5f4\") " pod="openstack/dnsmasq-dns-bccf8f775-pvm44"
Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.025009 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d894810-0b12-4078-9edb-9b78d95cd5f4-dns-svc\") pod \"dnsmasq-dns-bccf8f775-pvm44\" (UID: \"1d894810-0b12-4078-9edb-9b78d95cd5f4\") " pod="openstack/dnsmasq-dns-bccf8f775-pvm44"
Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.028480 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d894810-0b12-4078-9edb-9b78d95cd5f4-dns-svc\") pod \"dnsmasq-dns-bccf8f775-pvm44\" (UID: \"1d894810-0b12-4078-9edb-9b78d95cd5f4\") " pod="openstack/dnsmasq-dns-bccf8f775-pvm44"
Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.035288 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d894810-0b12-4078-9edb-9b78d95cd5f4-config\") pod \"dnsmasq-dns-bccf8f775-pvm44\" (UID: \"1d894810-0b12-4078-9edb-9b78d95cd5f4\") " pod="openstack/dnsmasq-dns-bccf8f775-pvm44"
Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.036010 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d894810-0b12-4078-9edb-9b78d95cd5f4-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-pvm44\" (UID: \"1d894810-0b12-4078-9edb-9b78d95cd5f4\") " pod="openstack/dnsmasq-dns-bccf8f775-pvm44"
Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.036066 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d894810-0b12-4078-9edb-9b78d95cd5f4-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-pvm44\" (UID: \"1d894810-0b12-4078-9edb-9b78d95cd5f4\") " pod="openstack/dnsmasq-dns-bccf8f775-pvm44"
Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.037128 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d894810-0b12-4078-9edb-9b78d95cd5f4-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-pvm44\" (UID: \"1d894810-0b12-4078-9edb-9b78d95cd5f4\") " pod="openstack/dnsmasq-dns-bccf8f775-pvm44"
Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.054098 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbbqc\" (UniqueName: \"kubernetes.io/projected/9c9d0bcf-8f09-4913-855f-f4409d61e726-kube-api-access-cbbqc\") pod \"nova-scheduler-0\" (UID: \"9c9d0bcf-8f09-4913-855f-f4409d61e726\") " pod="openstack/nova-scheduler-0"
Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.055545 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c9d0bcf-8f09-4913-855f-f4409d61e726-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9c9d0bcf-8f09-4913-855f-f4409d61e726\") " pod="openstack/nova-scheduler-0"
Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.056879 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c9d0bcf-8f09-4913-855f-f4409d61e726-config-data\") pod \"nova-scheduler-0\" (UID: \"9c9d0bcf-8f09-4913-855f-f4409d61e726\") " pod="openstack/nova-scheduler-0"
Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.073041 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2plw\" (UniqueName: \"kubernetes.io/projected/1d894810-0b12-4078-9edb-9b78d95cd5f4-kube-api-access-z2plw\") pod \"dnsmasq-dns-bccf8f775-pvm44\" (UID: \"1d894810-0b12-4078-9edb-9b78d95cd5f4\") " pod="openstack/dnsmasq-dns-bccf8f775-pvm44"
Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.209271 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.225668 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-pvm44"
Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.305605 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.548617 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-jc7z7"]
Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.549948 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-jc7z7"
Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.553752 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts"
Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.554160 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.560419 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-jc7z7"]
Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.636106 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41faaec3-50be-468a-b6ea-8967aa8bbe99-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-jc7z7\" (UID: \"41faaec3-50be-468a-b6ea-8967aa8bbe99\") " pod="openstack/nova-cell1-conductor-db-sync-jc7z7"
Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.636177 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41faaec3-50be-468a-b6ea-8967aa8bbe99-scripts\") pod \"nova-cell1-conductor-db-sync-jc7z7\" (UID: \"41faaec3-50be-468a-b6ea-8967aa8bbe99\") " pod="openstack/nova-cell1-conductor-db-sync-jc7z7"
Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.636239 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41faaec3-50be-468a-b6ea-8967aa8bbe99-config-data\") pod \"nova-cell1-conductor-db-sync-jc7z7\" (UID: \"41faaec3-50be-468a-b6ea-8967aa8bbe99\") " pod="openstack/nova-cell1-conductor-db-sync-jc7z7"
Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.636303 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsx52\" (UniqueName: \"kubernetes.io/projected/41faaec3-50be-468a-b6ea-8967aa8bbe99-kube-api-access-vsx52\") pod \"nova-cell1-conductor-db-sync-jc7z7\" (UID: \"41faaec3-50be-468a-b6ea-8967aa8bbe99\") " pod="openstack/nova-cell1-conductor-db-sync-jc7z7"
Jan 21 16:09:02 crc kubenswrapper[4760]: E0121 16:09:02.690273 4760 secret.go:188] Couldn't get secret openstack/nova-api-config-data: failed to sync secret cache: timed out waiting for the condition
Jan 21 16:09:02 crc kubenswrapper[4760]: E0121 16:09:02.690394 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3cb009b-917a-4689-85cc-6d1a4669ebb5-config-data podName:e3cb009b-917a-4689-85cc-6d1a4669ebb5 nodeName:}" failed. No retries permitted until 2026-01-21 16:09:03.190370694 +0000 UTC m=+1313.858140272 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/e3cb009b-917a-4689-85cc-6d1a4669ebb5-config-data") pod "nova-api-0" (UID: "e3cb009b-917a-4689-85cc-6d1a4669ebb5") : failed to sync secret cache: timed out waiting for the condition
Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.737786 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41faaec3-50be-468a-b6ea-8967aa8bbe99-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-jc7z7\" (UID: \"41faaec3-50be-468a-b6ea-8967aa8bbe99\") " pod="openstack/nova-cell1-conductor-db-sync-jc7z7"
Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.737893 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41faaec3-50be-468a-b6ea-8967aa8bbe99-scripts\") pod \"nova-cell1-conductor-db-sync-jc7z7\" (UID: \"41faaec3-50be-468a-b6ea-8967aa8bbe99\") " pod="openstack/nova-cell1-conductor-db-sync-jc7z7"
Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.737936 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41faaec3-50be-468a-b6ea-8967aa8bbe99-config-data\") pod \"nova-cell1-conductor-db-sync-jc7z7\" (UID: \"41faaec3-50be-468a-b6ea-8967aa8bbe99\") " pod="openstack/nova-cell1-conductor-db-sync-jc7z7"
Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.738043 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsx52\" (UniqueName: \"kubernetes.io/projected/41faaec3-50be-468a-b6ea-8967aa8bbe99-kube-api-access-vsx52\") pod \"nova-cell1-conductor-db-sync-jc7z7\" (UID: \"41faaec3-50be-468a-b6ea-8967aa8bbe99\") " pod="openstack/nova-cell1-conductor-db-sync-jc7z7"
Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.747806 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41faaec3-50be-468a-b6ea-8967aa8bbe99-scripts\") pod \"nova-cell1-conductor-db-sync-jc7z7\" (UID: \"41faaec3-50be-468a-b6ea-8967aa8bbe99\") " pod="openstack/nova-cell1-conductor-db-sync-jc7z7"
Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.753174 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41faaec3-50be-468a-b6ea-8967aa8bbe99-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-jc7z7\" (UID: \"41faaec3-50be-468a-b6ea-8967aa8bbe99\") " pod="openstack/nova-cell1-conductor-db-sync-jc7z7"
Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.756128 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41faaec3-50be-468a-b6ea-8967aa8bbe99-config-data\") pod \"nova-cell1-conductor-db-sync-jc7z7\" (UID: \"41faaec3-50be-468a-b6ea-8967aa8bbe99\") " pod="openstack/nova-cell1-conductor-db-sync-jc7z7"
Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.756662 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsx52\" (UniqueName: \"kubernetes.io/projected/41faaec3-50be-468a-b6ea-8967aa8bbe99-kube-api-access-vsx52\") pod \"nova-cell1-conductor-db-sync-jc7z7\" (UID: \"41faaec3-50be-468a-b6ea-8967aa8bbe99\") " pod="openstack/nova-cell1-conductor-db-sync-jc7z7"
Jan 21 16:09:02 crc kubenswrapper[4760]: I0121 16:09:02.868581 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-jc7z7"
Jan 21 16:09:03 crc kubenswrapper[4760]: I0121 16:09:03.029537 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 21 16:09:03 crc kubenswrapper[4760]: I0121 16:09:03.288610 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3cb009b-917a-4689-85cc-6d1a4669ebb5-config-data\") pod \"nova-api-0\" (UID: \"e3cb009b-917a-4689-85cc-6d1a4669ebb5\") " pod="openstack/nova-api-0"
Jan 21 16:09:03 crc kubenswrapper[4760]: I0121 16:09:03.293668 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3cb009b-917a-4689-85cc-6d1a4669ebb5-config-data\") pod \"nova-api-0\" (UID: \"e3cb009b-917a-4689-85cc-6d1a4669ebb5\") " pod="openstack/nova-api-0"
Jan 21 16:09:03 crc kubenswrapper[4760]: I0121 16:09:03.354203 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 21 16:09:03 crc kubenswrapper[4760]: I0121 16:09:03.446831 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-nfqg4"]
Jan 21 16:09:03 crc kubenswrapper[4760]: I0121 16:09:03.486671 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-pvm44"]
Jan 21 16:09:03 crc kubenswrapper[4760]: W0121 16:09:03.500338 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc896aef3_816a_45b4_80fc_f21db51900ad.slice/crio-1ab992d3c51de8ff1d97b84954d6b00bb6cc8f659b4897205f16fa95d66ea9f8 WatchSource:0}: Error finding container 1ab992d3c51de8ff1d97b84954d6b00bb6cc8f659b4897205f16fa95d66ea9f8: Status 404 returned error can't find the container with id 1ab992d3c51de8ff1d97b84954d6b00bb6cc8f659b4897205f16fa95d66ea9f8
Jan 21 16:09:03 crc kubenswrapper[4760]: I0121 16:09:03.507910 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 21 16:09:03 crc kubenswrapper[4760]: I0121 16:09:03.534371 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 21 16:09:03 crc kubenswrapper[4760]: I0121 16:09:03.557512 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 21 16:09:03 crc kubenswrapper[4760]: I0121 16:09:03.615343 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-jc7z7"]
Jan 21 16:09:03 crc kubenswrapper[4760]: W0121 16:09:03.908753 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3cb009b_917a_4689_85cc_6d1a4669ebb5.slice/crio-20fde2041e60b76fc38f96204c2f0661139392474a32e17d57658b6399adb0ef WatchSource:0}: Error finding container 20fde2041e60b76fc38f96204c2f0661139392474a32e17d57658b6399adb0ef: Status 404 returned error can't find the container with id 20fde2041e60b76fc38f96204c2f0661139392474a32e17d57658b6399adb0ef
Jan 21 16:09:03 crc kubenswrapper[4760]: I0121 16:09:03.911523 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 21 16:09:04 crc kubenswrapper[4760]: I0121 16:09:04.444306 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0221f422-acd4-4933-a761-d206c007f5db","Type":"ContainerStarted","Data":"4ddbd595cd741bef3bad7514c7e152e808f421d819a25fc9d418f4e68507474d"}
Jan 21 16:09:04 crc kubenswrapper[4760]: I0121 16:09:04.446902 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-pvm44" event={"ID":"1d894810-0b12-4078-9edb-9b78d95cd5f4","Type":"ContainerStarted","Data":"4668563eeac781e527ddeca3a577821c300ada3336e66292dfe89a3ded8156d3"}
Jan 21 16:09:04 crc kubenswrapper[4760]: I0121 16:09:04.448007 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9c9d0bcf-8f09-4913-855f-f4409d61e726","Type":"ContainerStarted","Data":"0231afbb84218aca62d55a801072377388e58f126925cc7b8176dc2a7b013ec3"}
Jan 21 16:09:04 crc kubenswrapper[4760]: I0121 16:09:04.451739 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-jc7z7" event={"ID":"41faaec3-50be-468a-b6ea-8967aa8bbe99","Type":"ContainerStarted","Data":"c94932625151bb09239d5c05a255a61766b1996484f31229aafa584fd1d0b8ab"}
Jan 21 16:09:04 crc kubenswrapper[4760]: I0121 16:09:04.453375 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c896aef3-816a-45b4-80fc-f21db51900ad","Type":"ContainerStarted","Data":"1ab992d3c51de8ff1d97b84954d6b00bb6cc8f659b4897205f16fa95d66ea9f8"}
Jan 21 16:09:04 crc kubenswrapper[4760]: I0121 16:09:04.454421 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e3cb009b-917a-4689-85cc-6d1a4669ebb5","Type":"ContainerStarted","Data":"20fde2041e60b76fc38f96204c2f0661139392474a32e17d57658b6399adb0ef"}
Jan 21 16:09:04 crc kubenswrapper[4760]: I0121 16:09:04.456543 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-nfqg4" event={"ID":"d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337","Type":"ContainerStarted","Data":"97db2f0eeee1a5c6ea6021e6e663bd6831e3994573b7078a5fd61116591750c6"}
Jan 21 16:09:05 crc kubenswrapper[4760]: I0121 16:09:05.311772 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 21 16:09:05 crc kubenswrapper[4760]: I0121 16:09:05.320026 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 21 16:09:05 crc kubenswrapper[4760]: I0121 16:09:05.473125 4760 generic.go:334] "Generic (PLEG): container finished" podID="1d894810-0b12-4078-9edb-9b78d95cd5f4" containerID="75eac5402095150f6e05a324475d5637a69a1ae97bf4ba4251a0a215dc883cec" exitCode=0
Jan 21 16:09:05 crc kubenswrapper[4760]: I0121 16:09:05.474141 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-pvm44" event={"ID":"1d894810-0b12-4078-9edb-9b78d95cd5f4","Type":"ContainerDied","Data":"75eac5402095150f6e05a324475d5637a69a1ae97bf4ba4251a0a215dc883cec"}
Jan 21 16:09:05 crc kubenswrapper[4760]: I0121 16:09:05.482577 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-jc7z7" event={"ID":"41faaec3-50be-468a-b6ea-8967aa8bbe99","Type":"ContainerStarted","Data":"84e4c84af1ff2143de51cce41a89dfda5cc1a65931e6b3d93329ca7a543f98e7"}
Jan 21 16:09:05 crc kubenswrapper[4760]: I0121 16:09:05.487666 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-nfqg4" event={"ID":"d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337","Type":"ContainerStarted","Data":"ff0db16b702c509a9465d5c008cbe9aad0899e81917702f30f5fa2e237c2f394"}
Jan 21 16:09:05 crc kubenswrapper[4760]: I0121 16:09:05.525623 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-nfqg4" podStartSLOduration=4.525600389 podStartE2EDuration="4.525600389s" podCreationTimestamp="2026-01-21 16:09:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:09:05.523951628 +0000 UTC m=+1316.191721206" watchObservedRunningTime="2026-01-21 16:09:05.525600389 +0000 UTC m=+1316.193369967"
Jan 21 16:09:05 crc kubenswrapper[4760]: I0121 16:09:05.553225 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-jc7z7" podStartSLOduration=3.55319522 podStartE2EDuration="3.55319522s" podCreationTimestamp="2026-01-21 16:09:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:09:05.537016751 +0000 UTC m=+1316.204786329" watchObservedRunningTime="2026-01-21 16:09:05.55319522 +0000 UTC m=+1316.220964798"
Jan 21 16:09:06 crc kubenswrapper[4760]: I0121 16:09:06.506030 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-pvm44" event={"ID":"1d894810-0b12-4078-9edb-9b78d95cd5f4","Type":"ContainerStarted","Data":"1db349c07ca2ca94ed200f75476da5a49f2c030ff9295413b8832a8ac97b6894"}
Jan 21 16:09:06 crc kubenswrapper[4760]: I0121 16:09:06.508350 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bccf8f775-pvm44"
Jan 21 16:09:06 crc kubenswrapper[4760]: I0121 16:09:06.560831 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-bccf8f775-pvm44" podStartSLOduration=5.560806591 podStartE2EDuration="5.560806591s" podCreationTimestamp="2026-01-21 16:09:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:09:06.547904683 +0000 UTC m=+1317.215674261" watchObservedRunningTime="2026-01-21 16:09:06.560806591 +0000 UTC m=+1317.228576169"
Jan 21 16:09:10 crc kubenswrapper[4760]: I0121 16:09:10.542581 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0221f422-acd4-4933-a761-d206c007f5db","Type":"ContainerStarted","Data":"e3522c248b0e6b9e6d2a8484aca7f19bbe875cff2e4a1d83e8187399cb5c6efe"}
Jan 21 16:09:10 crc kubenswrapper[4760]: I0121 16:09:10.543243 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0221f422-acd4-4933-a761-d206c007f5db","Type":"ContainerStarted","Data":"f1e3ec96921198de37682aa71236fd3b8ba2836ed5470f9cb8bc5d033bed1c8e"}
Jan 21 16:09:10 crc kubenswrapper[4760]: I0121 16:09:10.542941 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0221f422-acd4-4933-a761-d206c007f5db" containerName="nova-metadata-metadata" containerID="cri-o://e3522c248b0e6b9e6d2a8484aca7f19bbe875cff2e4a1d83e8187399cb5c6efe" gracePeriod=30
Jan 21 16:09:10 crc kubenswrapper[4760]: I0121 16:09:10.542688 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0221f422-acd4-4933-a761-d206c007f5db" containerName="nova-metadata-log" containerID="cri-o://f1e3ec96921198de37682aa71236fd3b8ba2836ed5470f9cb8bc5d033bed1c8e" gracePeriod=30
Jan 21 16:09:10 crc kubenswrapper[4760]: I0121 16:09:10.544293 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9c9d0bcf-8f09-4913-855f-f4409d61e726","Type":"ContainerStarted","Data":"9f707b31ee83273b71392ff2f56827827dfdf06ddcdacb22da57f8cda94d68b0"}
Jan 21 16:09:10 crc kubenswrapper[4760]: I0121 16:09:10.550137 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c896aef3-816a-45b4-80fc-f21db51900ad","Type":"ContainerStarted","Data":"aab5a746fdacd0e051a5f0fb8bb2c6015a6440226ea0d3546c742938997503b8"}
Jan 21 16:09:10 crc kubenswrapper[4760]: I0121 16:09:10.550260 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="c896aef3-816a-45b4-80fc-f21db51900ad" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://aab5a746fdacd0e051a5f0fb8bb2c6015a6440226ea0d3546c742938997503b8" gracePeriod=30
Jan 21 16:09:10 crc kubenswrapper[4760]: I0121 16:09:10.552304 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e3cb009b-917a-4689-85cc-6d1a4669ebb5","Type":"ContainerStarted","Data":"832c7fd8e879b69f95ea4e97d6c0cb8f7909a0d3173fa524839e917689c536b3"}
Jan 21 16:09:10 crc kubenswrapper[4760]: I0121 16:09:10.552347 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e3cb009b-917a-4689-85cc-6d1a4669ebb5","Type":"ContainerStarted","Data":"94ab6d98383e0314fdbefca1dd3ea09eb8e094d0005666507d43c70cd376cc0d"}
Jan 21 16:09:10 crc kubenswrapper[4760]: I0121 16:09:10.575763 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.915050348 podStartE2EDuration="9.575728266s" podCreationTimestamp="2026-01-21 16:09:01 +0000 UTC" firstStartedPulling="2026-01-21 16:09:03.578765283 +0000 UTC m=+1314.246534861" lastFinishedPulling="2026-01-21 16:09:09.239443201 +0000 UTC m=+1319.907212779" observedRunningTime="2026-01-21 16:09:10.568684012 +0000 UTC m=+1321.236453610" watchObservedRunningTime="2026-01-21 16:09:10.575728266 +0000 UTC m=+1321.243497844"
Jan 21 16:09:10 crc kubenswrapper[4760]: I0121 16:09:10.630655 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.970047334 podStartE2EDuration="9.630634651s" podCreationTimestamp="2026-01-21 16:09:01 +0000 UTC" firstStartedPulling="2026-01-21 16:09:03.577978603 +0000 UTC m=+1314.245748211" lastFinishedPulling="2026-01-21 16:09:09.23856595 +0000 UTC m=+1319.906335528" observedRunningTime="2026-01-21 16:09:10.592250873 +0000 UTC m=+1321.260020451" watchObservedRunningTime="2026-01-21 16:09:10.630634651 +0000 UTC m=+1321.298404229"
Jan 21 16:09:10 crc kubenswrapper[4760]: I0121 16:09:10.631247 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=4.302256365 podStartE2EDuration="9.631242416s" podCreationTimestamp="2026-01-21 16:09:01 +0000 UTC" firstStartedPulling="2026-01-21 16:09:03.918310164 +0000 UTC m=+1314.586079752" lastFinishedPulling="2026-01-21 16:09:09.247296225 +0000 UTC m=+1319.915065803" observedRunningTime="2026-01-21 16:09:10.61846052 +0000 UTC m=+1321.286230108" watchObservedRunningTime="2026-01-21 16:09:10.631242416 +0000 UTC m=+1321.299011994"
Jan 21 16:09:10 crc kubenswrapper[4760]: I0121 16:09:10.661361 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.927709299 podStartE2EDuration="9.660020746s" podCreationTimestamp="2026-01-21 16:09:01 +0000 UTC" firstStartedPulling="2026-01-21 16:09:03.506242712 +0000 UTC m=+1314.174012290" lastFinishedPulling="2026-01-21 16:09:09.238554159 +0000 UTC m=+1319.906323737" observedRunningTime="2026-01-21 16:09:10.646275117 +0000 UTC m=+1321.314044695" watchObservedRunningTime="2026-01-21 16:09:10.660020746 +0000 UTC m=+1321.327790324"
Jan 21 16:09:10 crc kubenswrapper[4760]: I0121 16:09:10.922396 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.193455 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.340277 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0221f422-acd4-4933-a761-d206c007f5db-config-data\") pod \"0221f422-acd4-4933-a761-d206c007f5db\" (UID: \"0221f422-acd4-4933-a761-d206c007f5db\") "
Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.341215 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0221f422-acd4-4933-a761-d206c007f5db-logs\") pod \"0221f422-acd4-4933-a761-d206c007f5db\" (UID: \"0221f422-acd4-4933-a761-d206c007f5db\") "
Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.341567 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0221f422-acd4-4933-a761-d206c007f5db-logs" (OuterVolumeSpecName: "logs") pod "0221f422-acd4-4933-a761-d206c007f5db" (UID: "0221f422-acd4-4933-a761-d206c007f5db"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.341630 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7v6d4\" (UniqueName: \"kubernetes.io/projected/0221f422-acd4-4933-a761-d206c007f5db-kube-api-access-7v6d4\") pod \"0221f422-acd4-4933-a761-d206c007f5db\" (UID: \"0221f422-acd4-4933-a761-d206c007f5db\") "
Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.341750 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0221f422-acd4-4933-a761-d206c007f5db-combined-ca-bundle\") pod \"0221f422-acd4-4933-a761-d206c007f5db\" (UID: \"0221f422-acd4-4933-a761-d206c007f5db\") "
Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.342508 4760 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0221f422-acd4-4933-a761-d206c007f5db-logs\") on node \"crc\" DevicePath \"\""
Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.361253 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0221f422-acd4-4933-a761-d206c007f5db-kube-api-access-7v6d4" (OuterVolumeSpecName: "kube-api-access-7v6d4") pod "0221f422-acd4-4933-a761-d206c007f5db" (UID: "0221f422-acd4-4933-a761-d206c007f5db"). InnerVolumeSpecName "kube-api-access-7v6d4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.373736 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0221f422-acd4-4933-a761-d206c007f5db-config-data" (OuterVolumeSpecName: "config-data") pod "0221f422-acd4-4933-a761-d206c007f5db" (UID: "0221f422-acd4-4933-a761-d206c007f5db"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.374315 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0221f422-acd4-4933-a761-d206c007f5db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0221f422-acd4-4933-a761-d206c007f5db" (UID: "0221f422-acd4-4933-a761-d206c007f5db"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.443388 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0221f422-acd4-4933-a761-d206c007f5db-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.443425 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0221f422-acd4-4933-a761-d206c007f5db-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.443439 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7v6d4\" (UniqueName: \"kubernetes.io/projected/0221f422-acd4-4933-a761-d206c007f5db-kube-api-access-7v6d4\") on node \"crc\" DevicePath \"\""
Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.566061 4760 generic.go:334] "Generic (PLEG): container finished" podID="0221f422-acd4-4933-a761-d206c007f5db" containerID="e3522c248b0e6b9e6d2a8484aca7f19bbe875cff2e4a1d83e8187399cb5c6efe" exitCode=0
Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.566097 4760 generic.go:334] "Generic (PLEG): container finished" podID="0221f422-acd4-4933-a761-d206c007f5db" containerID="f1e3ec96921198de37682aa71236fd3b8ba2836ed5470f9cb8bc5d033bed1c8e" exitCode=143
Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.566527 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.567834 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0221f422-acd4-4933-a761-d206c007f5db","Type":"ContainerDied","Data":"e3522c248b0e6b9e6d2a8484aca7f19bbe875cff2e4a1d83e8187399cb5c6efe"}
Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.567873 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0221f422-acd4-4933-a761-d206c007f5db","Type":"ContainerDied","Data":"f1e3ec96921198de37682aa71236fd3b8ba2836ed5470f9cb8bc5d033bed1c8e"}
Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.567885 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0221f422-acd4-4933-a761-d206c007f5db","Type":"ContainerDied","Data":"4ddbd595cd741bef3bad7514c7e152e808f421d819a25fc9d418f4e68507474d"}
Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.567903 4760 scope.go:117] "RemoveContainer" containerID="e3522c248b0e6b9e6d2a8484aca7f19bbe875cff2e4a1d83e8187399cb5c6efe"
Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.608749 4760 scope.go:117] "RemoveContainer" containerID="f1e3ec96921198de37682aa71236fd3b8ba2836ed5470f9cb8bc5d033bed1c8e"
Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.633607 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.636439 4760 scope.go:117] "RemoveContainer" containerID="e3522c248b0e6b9e6d2a8484aca7f19bbe875cff2e4a1d83e8187399cb5c6efe"
Jan 21 16:09:11 crc kubenswrapper[4760]: E0121 16:09:11.637107 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3522c248b0e6b9e6d2a8484aca7f19bbe875cff2e4a1d83e8187399cb5c6efe\": container with ID starting with e3522c248b0e6b9e6d2a8484aca7f19bbe875cff2e4a1d83e8187399cb5c6efe not found: ID does not exist" containerID="e3522c248b0e6b9e6d2a8484aca7f19bbe875cff2e4a1d83e8187399cb5c6efe"
Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.637153 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3522c248b0e6b9e6d2a8484aca7f19bbe875cff2e4a1d83e8187399cb5c6efe"} err="failed to get container status \"e3522c248b0e6b9e6d2a8484aca7f19bbe875cff2e4a1d83e8187399cb5c6efe\": rpc error: code = NotFound desc = could not find container \"e3522c248b0e6b9e6d2a8484aca7f19bbe875cff2e4a1d83e8187399cb5c6efe\": container with ID starting with e3522c248b0e6b9e6d2a8484aca7f19bbe875cff2e4a1d83e8187399cb5c6efe not found: ID does not exist"
Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.637190 4760 scope.go:117] "RemoveContainer" containerID="f1e3ec96921198de37682aa71236fd3b8ba2836ed5470f9cb8bc5d033bed1c8e"
Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.637207 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Jan 21 16:09:11 crc kubenswrapper[4760]: E0121 16:09:11.637651 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1e3ec96921198de37682aa71236fd3b8ba2836ed5470f9cb8bc5d033bed1c8e\": container with ID starting with f1e3ec96921198de37682aa71236fd3b8ba2836ed5470f9cb8bc5d033bed1c8e not found: ID does not exist" containerID="f1e3ec96921198de37682aa71236fd3b8ba2836ed5470f9cb8bc5d033bed1c8e"
Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.637688 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1e3ec96921198de37682aa71236fd3b8ba2836ed5470f9cb8bc5d033bed1c8e"} err="failed to get container status \"f1e3ec96921198de37682aa71236fd3b8ba2836ed5470f9cb8bc5d033bed1c8e\": rpc error: code = NotFound desc = could not find container \"f1e3ec96921198de37682aa71236fd3b8ba2836ed5470f9cb8bc5d033bed1c8e\": container with ID starting with f1e3ec96921198de37682aa71236fd3b8ba2836ed5470f9cb8bc5d033bed1c8e not found: ID does not exist"
Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.637711 4760 scope.go:117] "RemoveContainer" containerID="e3522c248b0e6b9e6d2a8484aca7f19bbe875cff2e4a1d83e8187399cb5c6efe"
Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.638423 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3522c248b0e6b9e6d2a8484aca7f19bbe875cff2e4a1d83e8187399cb5c6efe"} err="failed to get container status \"e3522c248b0e6b9e6d2a8484aca7f19bbe875cff2e4a1d83e8187399cb5c6efe\": rpc error: code = NotFound desc = could not find container \"e3522c248b0e6b9e6d2a8484aca7f19bbe875cff2e4a1d83e8187399cb5c6efe\": container with ID starting with e3522c248b0e6b9e6d2a8484aca7f19bbe875cff2e4a1d83e8187399cb5c6efe not found: ID does not exist"
Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.638446 4760 scope.go:117] "RemoveContainer" containerID="f1e3ec96921198de37682aa71236fd3b8ba2836ed5470f9cb8bc5d033bed1c8e"
Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.638870 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1e3ec96921198de37682aa71236fd3b8ba2836ed5470f9cb8bc5d033bed1c8e"} err="failed to get container status \"f1e3ec96921198de37682aa71236fd3b8ba2836ed5470f9cb8bc5d033bed1c8e\": rpc error: code = NotFound desc = could not find container \"f1e3ec96921198de37682aa71236fd3b8ba2836ed5470f9cb8bc5d033bed1c8e\": container with ID starting with f1e3ec96921198de37682aa71236fd3b8ba2836ed5470f9cb8bc5d033bed1c8e not found: ID does not exist"
Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.651000 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 21 16:09:11 crc kubenswrapper[4760]: E0121 16:09:11.651421 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0221f422-acd4-4933-a761-d206c007f5db" containerName="nova-metadata-log"
Jan
21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.652607 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="0221f422-acd4-4933-a761-d206c007f5db" containerName="nova-metadata-log" Jan 21 16:09:11 crc kubenswrapper[4760]: E0121 16:09:11.652635 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0221f422-acd4-4933-a761-d206c007f5db" containerName="nova-metadata-metadata" Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.652655 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="0221f422-acd4-4933-a761-d206c007f5db" containerName="nova-metadata-metadata" Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.652919 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="0221f422-acd4-4933-a761-d206c007f5db" containerName="nova-metadata-metadata" Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.652946 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="0221f422-acd4-4933-a761-d206c007f5db" containerName="nova-metadata-log" Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.655782 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.658689 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.659147 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.677985 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.749274 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbabd97d-823a-4eb1-93e5-e91589735b4a-config-data\") pod \"nova-metadata-0\" (UID: \"bbabd97d-823a-4eb1-93e5-e91589735b4a\") " pod="openstack/nova-metadata-0" Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.749970 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbabd97d-823a-4eb1-93e5-e91589735b4a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"bbabd97d-823a-4eb1-93e5-e91589735b4a\") " pod="openstack/nova-metadata-0" Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.750115 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbabd97d-823a-4eb1-93e5-e91589735b4a-logs\") pod \"nova-metadata-0\" (UID: \"bbabd97d-823a-4eb1-93e5-e91589735b4a\") " pod="openstack/nova-metadata-0" Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.750227 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbabd97d-823a-4eb1-93e5-e91589735b4a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"bbabd97d-823a-4eb1-93e5-e91589735b4a\") " pod="openstack/nova-metadata-0" Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.750390 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5wcw\" (UniqueName: \"kubernetes.io/projected/bbabd97d-823a-4eb1-93e5-e91589735b4a-kube-api-access-t5wcw\") pod \"nova-metadata-0\" (UID: \"bbabd97d-823a-4eb1-93e5-e91589735b4a\") " pod="openstack/nova-metadata-0" Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.852488 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbabd97d-823a-4eb1-93e5-e91589735b4a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bbabd97d-823a-4eb1-93e5-e91589735b4a\") " pod="openstack/nova-metadata-0" Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.852626 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5wcw\" (UniqueName: \"kubernetes.io/projected/bbabd97d-823a-4eb1-93e5-e91589735b4a-kube-api-access-t5wcw\") pod \"nova-metadata-0\" (UID: \"bbabd97d-823a-4eb1-93e5-e91589735b4a\") " pod="openstack/nova-metadata-0" Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.852729 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbabd97d-823a-4eb1-93e5-e91589735b4a-config-data\") pod \"nova-metadata-0\" (UID: \"bbabd97d-823a-4eb1-93e5-e91589735b4a\") " pod="openstack/nova-metadata-0" Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.852789 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbabd97d-823a-4eb1-93e5-e91589735b4a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"bbabd97d-823a-4eb1-93e5-e91589735b4a\") " pod="openstack/nova-metadata-0" Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 
16:09:11.852820 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbabd97d-823a-4eb1-93e5-e91589735b4a-logs\") pod \"nova-metadata-0\" (UID: \"bbabd97d-823a-4eb1-93e5-e91589735b4a\") " pod="openstack/nova-metadata-0" Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.853404 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbabd97d-823a-4eb1-93e5-e91589735b4a-logs\") pod \"nova-metadata-0\" (UID: \"bbabd97d-823a-4eb1-93e5-e91589735b4a\") " pod="openstack/nova-metadata-0" Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.857970 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbabd97d-823a-4eb1-93e5-e91589735b4a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bbabd97d-823a-4eb1-93e5-e91589735b4a\") " pod="openstack/nova-metadata-0" Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.858552 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbabd97d-823a-4eb1-93e5-e91589735b4a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"bbabd97d-823a-4eb1-93e5-e91589735b4a\") " pod="openstack/nova-metadata-0" Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.862141 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbabd97d-823a-4eb1-93e5-e91589735b4a-config-data\") pod \"nova-metadata-0\" (UID: \"bbabd97d-823a-4eb1-93e5-e91589735b4a\") " pod="openstack/nova-metadata-0" Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.882019 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5wcw\" (UniqueName: \"kubernetes.io/projected/bbabd97d-823a-4eb1-93e5-e91589735b4a-kube-api-access-t5wcw\") pod \"nova-metadata-0\" (UID: 
\"bbabd97d-823a-4eb1-93e5-e91589735b4a\") " pod="openstack/nova-metadata-0" Jan 21 16:09:11 crc kubenswrapper[4760]: I0121 16:09:11.980863 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 16:09:12 crc kubenswrapper[4760]: I0121 16:09:12.210415 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:09:12 crc kubenswrapper[4760]: I0121 16:09:12.229562 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-bccf8f775-pvm44" Jan 21 16:09:12 crc kubenswrapper[4760]: I0121 16:09:12.301826 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-v57lc"] Jan 21 16:09:12 crc kubenswrapper[4760]: I0121 16:09:12.302457 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6578955fd5-v57lc" podUID="28ae7881-d794-4020-ae6d-a192927d75c8" containerName="dnsmasq-dns" containerID="cri-o://ff67fd5ca06d840d69b08678dbd65581a03876594be36121c532a55770c8469f" gracePeriod=10 Jan 21 16:09:12 crc kubenswrapper[4760]: I0121 16:09:12.310505 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 21 16:09:12 crc kubenswrapper[4760]: I0121 16:09:12.311051 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 21 16:09:12 crc kubenswrapper[4760]: I0121 16:09:12.367613 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 21 16:09:12 crc kubenswrapper[4760]: I0121 16:09:12.538674 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:09:12 crc kubenswrapper[4760]: I0121 16:09:12.597504 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"bbabd97d-823a-4eb1-93e5-e91589735b4a","Type":"ContainerStarted","Data":"74ea022fa8d55d16b391886f8a6ff9acda3aecd426fbca239fb736c3047b9b66"} Jan 21 16:09:12 crc kubenswrapper[4760]: I0121 16:09:12.609632 4760 generic.go:334] "Generic (PLEG): container finished" podID="28ae7881-d794-4020-ae6d-a192927d75c8" containerID="ff67fd5ca06d840d69b08678dbd65581a03876594be36121c532a55770c8469f" exitCode=0 Jan 21 16:09:12 crc kubenswrapper[4760]: I0121 16:09:12.610024 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-v57lc" event={"ID":"28ae7881-d794-4020-ae6d-a192927d75c8","Type":"ContainerDied","Data":"ff67fd5ca06d840d69b08678dbd65581a03876594be36121c532a55770c8469f"} Jan 21 16:09:12 crc kubenswrapper[4760]: E0121 16:09:12.652470 4760 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28ae7881_d794_4020_ae6d_a192927d75c8.slice/crio-conmon-ff67fd5ca06d840d69b08678dbd65581a03876594be36121c532a55770c8469f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28ae7881_d794_4020_ae6d_a192927d75c8.slice/crio-ff67fd5ca06d840d69b08678dbd65581a03876594be36121c532a55770c8469f.scope\": RecentStats: unable to find data in memory cache]" Jan 21 16:09:12 crc kubenswrapper[4760]: I0121 16:09:12.656712 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 21 16:09:12 crc kubenswrapper[4760]: I0121 16:09:12.879735 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-v57lc" Jan 21 16:09:13 crc kubenswrapper[4760]: I0121 16:09:13.078895 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/28ae7881-d794-4020-ae6d-a192927d75c8-ovsdbserver-sb\") pod \"28ae7881-d794-4020-ae6d-a192927d75c8\" (UID: \"28ae7881-d794-4020-ae6d-a192927d75c8\") " Jan 21 16:09:13 crc kubenswrapper[4760]: I0121 16:09:13.079376 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/28ae7881-d794-4020-ae6d-a192927d75c8-ovsdbserver-nb\") pod \"28ae7881-d794-4020-ae6d-a192927d75c8\" (UID: \"28ae7881-d794-4020-ae6d-a192927d75c8\") " Jan 21 16:09:13 crc kubenswrapper[4760]: I0121 16:09:13.079439 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/28ae7881-d794-4020-ae6d-a192927d75c8-dns-swift-storage-0\") pod \"28ae7881-d794-4020-ae6d-a192927d75c8\" (UID: \"28ae7881-d794-4020-ae6d-a192927d75c8\") " Jan 21 16:09:13 crc kubenswrapper[4760]: I0121 16:09:13.079557 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28ae7881-d794-4020-ae6d-a192927d75c8-dns-svc\") pod \"28ae7881-d794-4020-ae6d-a192927d75c8\" (UID: \"28ae7881-d794-4020-ae6d-a192927d75c8\") " Jan 21 16:09:13 crc kubenswrapper[4760]: I0121 16:09:13.079606 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkl4v\" (UniqueName: \"kubernetes.io/projected/28ae7881-d794-4020-ae6d-a192927d75c8-kube-api-access-qkl4v\") pod \"28ae7881-d794-4020-ae6d-a192927d75c8\" (UID: \"28ae7881-d794-4020-ae6d-a192927d75c8\") " Jan 21 16:09:13 crc kubenswrapper[4760]: I0121 16:09:13.079699 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/28ae7881-d794-4020-ae6d-a192927d75c8-config\") pod \"28ae7881-d794-4020-ae6d-a192927d75c8\" (UID: \"28ae7881-d794-4020-ae6d-a192927d75c8\") " Jan 21 16:09:13 crc kubenswrapper[4760]: I0121 16:09:13.094506 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28ae7881-d794-4020-ae6d-a192927d75c8-kube-api-access-qkl4v" (OuterVolumeSpecName: "kube-api-access-qkl4v") pod "28ae7881-d794-4020-ae6d-a192927d75c8" (UID: "28ae7881-d794-4020-ae6d-a192927d75c8"). InnerVolumeSpecName "kube-api-access-qkl4v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:09:13 crc kubenswrapper[4760]: I0121 16:09:13.144825 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28ae7881-d794-4020-ae6d-a192927d75c8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "28ae7881-d794-4020-ae6d-a192927d75c8" (UID: "28ae7881-d794-4020-ae6d-a192927d75c8"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:09:13 crc kubenswrapper[4760]: I0121 16:09:13.154151 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28ae7881-d794-4020-ae6d-a192927d75c8-config" (OuterVolumeSpecName: "config") pod "28ae7881-d794-4020-ae6d-a192927d75c8" (UID: "28ae7881-d794-4020-ae6d-a192927d75c8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:09:13 crc kubenswrapper[4760]: I0121 16:09:13.157719 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28ae7881-d794-4020-ae6d-a192927d75c8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "28ae7881-d794-4020-ae6d-a192927d75c8" (UID: "28ae7881-d794-4020-ae6d-a192927d75c8"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:09:13 crc kubenswrapper[4760]: I0121 16:09:13.170902 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28ae7881-d794-4020-ae6d-a192927d75c8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "28ae7881-d794-4020-ae6d-a192927d75c8" (UID: "28ae7881-d794-4020-ae6d-a192927d75c8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:09:13 crc kubenswrapper[4760]: I0121 16:09:13.173066 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28ae7881-d794-4020-ae6d-a192927d75c8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "28ae7881-d794-4020-ae6d-a192927d75c8" (UID: "28ae7881-d794-4020-ae6d-a192927d75c8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:09:13 crc kubenswrapper[4760]: I0121 16:09:13.189469 4760 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28ae7881-d794-4020-ae6d-a192927d75c8-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:13 crc kubenswrapper[4760]: I0121 16:09:13.189508 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkl4v\" (UniqueName: \"kubernetes.io/projected/28ae7881-d794-4020-ae6d-a192927d75c8-kube-api-access-qkl4v\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:13 crc kubenswrapper[4760]: I0121 16:09:13.189519 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28ae7881-d794-4020-ae6d-a192927d75c8-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:13 crc kubenswrapper[4760]: I0121 16:09:13.189528 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/28ae7881-d794-4020-ae6d-a192927d75c8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 
21 16:09:13 crc kubenswrapper[4760]: I0121 16:09:13.189537 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/28ae7881-d794-4020-ae6d-a192927d75c8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:13 crc kubenswrapper[4760]: I0121 16:09:13.189546 4760 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/28ae7881-d794-4020-ae6d-a192927d75c8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:13 crc kubenswrapper[4760]: I0121 16:09:13.357516 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 16:09:13 crc kubenswrapper[4760]: I0121 16:09:13.357559 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 16:09:13 crc kubenswrapper[4760]: I0121 16:09:13.657651 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-v57lc" Jan 21 16:09:13 crc kubenswrapper[4760]: I0121 16:09:13.659264 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0221f422-acd4-4933-a761-d206c007f5db" path="/var/lib/kubelet/pods/0221f422-acd4-4933-a761-d206c007f5db/volumes" Jan 21 16:09:13 crc kubenswrapper[4760]: I0121 16:09:13.660103 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bbabd97d-823a-4eb1-93e5-e91589735b4a","Type":"ContainerStarted","Data":"0547c56974bf8400b42fe70baa517715d3a544f547decab2e94d6b16cd6fa68d"} Jan 21 16:09:13 crc kubenswrapper[4760]: I0121 16:09:13.660134 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bbabd97d-823a-4eb1-93e5-e91589735b4a","Type":"ContainerStarted","Data":"6bfbbc02544f493f7fcbeda139767036db774d7decbdee854bdfde91790687f0"} Jan 21 16:09:13 crc kubenswrapper[4760]: I0121 16:09:13.660145 4760 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-v57lc" event={"ID":"28ae7881-d794-4020-ae6d-a192927d75c8","Type":"ContainerDied","Data":"32fcd1b60d2d86a5acc94c6a6bc2f951249985413b27086c207699de1e47a6c2"} Jan 21 16:09:13 crc kubenswrapper[4760]: I0121 16:09:13.660171 4760 scope.go:117] "RemoveContainer" containerID="ff67fd5ca06d840d69b08678dbd65581a03876594be36121c532a55770c8469f" Jan 21 16:09:13 crc kubenswrapper[4760]: I0121 16:09:13.700709 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.700686572 podStartE2EDuration="2.700686572s" podCreationTimestamp="2026-01-21 16:09:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:09:13.68723704 +0000 UTC m=+1324.355006638" watchObservedRunningTime="2026-01-21 16:09:13.700686572 +0000 UTC m=+1324.368456150" Jan 21 16:09:13 crc kubenswrapper[4760]: I0121 16:09:13.706635 4760 scope.go:117] "RemoveContainer" containerID="d1c1490964aac721fec04529370c88c8f8ac1caecbb735bae40378ec6315de8a" Jan 21 16:09:13 crc kubenswrapper[4760]: I0121 16:09:13.762380 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-v57lc"] Jan 21 16:09:13 crc kubenswrapper[4760]: I0121 16:09:13.776437 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-v57lc"] Jan 21 16:09:14 crc kubenswrapper[4760]: I0121 16:09:14.437609 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e3cb009b-917a-4689-85cc-6d1a4669ebb5" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.190:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:09:14 crc kubenswrapper[4760]: I0121 16:09:14.437752 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" 
podUID="e3cb009b-917a-4689-85cc-6d1a4669ebb5" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.190:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:09:15 crc kubenswrapper[4760]: I0121 16:09:15.636838 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28ae7881-d794-4020-ae6d-a192927d75c8" path="/var/lib/kubelet/pods/28ae7881-d794-4020-ae6d-a192927d75c8/volumes" Jan 21 16:09:15 crc kubenswrapper[4760]: I0121 16:09:15.685919 4760 generic.go:334] "Generic (PLEG): container finished" podID="d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337" containerID="ff0db16b702c509a9465d5c008cbe9aad0899e81917702f30f5fa2e237c2f394" exitCode=0 Jan 21 16:09:15 crc kubenswrapper[4760]: I0121 16:09:15.685963 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-nfqg4" event={"ID":"d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337","Type":"ContainerDied","Data":"ff0db16b702c509a9465d5c008cbe9aad0899e81917702f30f5fa2e237c2f394"} Jan 21 16:09:16 crc kubenswrapper[4760]: I0121 16:09:16.697777 4760 generic.go:334] "Generic (PLEG): container finished" podID="41faaec3-50be-468a-b6ea-8967aa8bbe99" containerID="84e4c84af1ff2143de51cce41a89dfda5cc1a65931e6b3d93329ca7a543f98e7" exitCode=0 Jan 21 16:09:16 crc kubenswrapper[4760]: I0121 16:09:16.697865 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-jc7z7" event={"ID":"41faaec3-50be-468a-b6ea-8967aa8bbe99","Type":"ContainerDied","Data":"84e4c84af1ff2143de51cce41a89dfda5cc1a65931e6b3d93329ca7a543f98e7"} Jan 21 16:09:16 crc kubenswrapper[4760]: I0121 16:09:16.981904 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 16:09:16 crc kubenswrapper[4760]: I0121 16:09:16.982251 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 16:09:17 crc kubenswrapper[4760]: I0121 16:09:17.094610 
4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-nfqg4" Jan 21 16:09:17 crc kubenswrapper[4760]: I0121 16:09:17.167184 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337-config-data\") pod \"d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337\" (UID: \"d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337\") " Jan 21 16:09:17 crc kubenswrapper[4760]: I0121 16:09:17.167266 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337-combined-ca-bundle\") pod \"d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337\" (UID: \"d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337\") " Jan 21 16:09:17 crc kubenswrapper[4760]: I0121 16:09:17.167500 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337-scripts\") pod \"d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337\" (UID: \"d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337\") " Jan 21 16:09:17 crc kubenswrapper[4760]: I0121 16:09:17.167619 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5f84\" (UniqueName: \"kubernetes.io/projected/d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337-kube-api-access-t5f84\") pod \"d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337\" (UID: \"d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337\") " Jan 21 16:09:17 crc kubenswrapper[4760]: I0121 16:09:17.191126 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337-kube-api-access-t5f84" (OuterVolumeSpecName: "kube-api-access-t5f84") pod "d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337" (UID: "d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337"). InnerVolumeSpecName "kube-api-access-t5f84". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:09:17 crc kubenswrapper[4760]: I0121 16:09:17.201728 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337-scripts" (OuterVolumeSpecName: "scripts") pod "d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337" (UID: "d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:09:17 crc kubenswrapper[4760]: I0121 16:09:17.204531 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337" (UID: "d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:09:17 crc kubenswrapper[4760]: I0121 16:09:17.210568 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337-config-data" (OuterVolumeSpecName: "config-data") pod "d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337" (UID: "d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:09:17 crc kubenswrapper[4760]: I0121 16:09:17.269893 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:17 crc kubenswrapper[4760]: I0121 16:09:17.269944 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5f84\" (UniqueName: \"kubernetes.io/projected/d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337-kube-api-access-t5f84\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:17 crc kubenswrapper[4760]: I0121 16:09:17.269963 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:17 crc kubenswrapper[4760]: I0121 16:09:17.269975 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:17 crc kubenswrapper[4760]: I0121 16:09:17.709206 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-nfqg4" event={"ID":"d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337","Type":"ContainerDied","Data":"97db2f0eeee1a5c6ea6021e6e663bd6831e3994573b7078a5fd61116591750c6"} Jan 21 16:09:17 crc kubenswrapper[4760]: I0121 16:09:17.709548 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97db2f0eeee1a5c6ea6021e6e663bd6831e3994573b7078a5fd61116591750c6" Jan 21 16:09:17 crc kubenswrapper[4760]: I0121 16:09:17.709407 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-nfqg4" Jan 21 16:09:17 crc kubenswrapper[4760]: I0121 16:09:17.911607 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:09:17 crc kubenswrapper[4760]: I0121 16:09:17.911905 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e3cb009b-917a-4689-85cc-6d1a4669ebb5" containerName="nova-api-log" containerID="cri-o://94ab6d98383e0314fdbefca1dd3ea09eb8e094d0005666507d43c70cd376cc0d" gracePeriod=30 Jan 21 16:09:17 crc kubenswrapper[4760]: I0121 16:09:17.912444 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e3cb009b-917a-4689-85cc-6d1a4669ebb5" containerName="nova-api-api" containerID="cri-o://832c7fd8e879b69f95ea4e97d6c0cb8f7909a0d3173fa524839e917689c536b3" gracePeriod=30 Jan 21 16:09:17 crc kubenswrapper[4760]: I0121 16:09:17.925281 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 16:09:17 crc kubenswrapper[4760]: I0121 16:09:17.925577 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="9c9d0bcf-8f09-4913-855f-f4409d61e726" containerName="nova-scheduler-scheduler" containerID="cri-o://9f707b31ee83273b71392ff2f56827827dfdf06ddcdacb22da57f8cda94d68b0" gracePeriod=30 Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.007031 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.007334 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="bbabd97d-823a-4eb1-93e5-e91589735b4a" containerName="nova-metadata-log" containerID="cri-o://6bfbbc02544f493f7fcbeda139767036db774d7decbdee854bdfde91790687f0" gracePeriod=30 Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.007835 4760 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="bbabd97d-823a-4eb1-93e5-e91589735b4a" containerName="nova-metadata-metadata" containerID="cri-o://0547c56974bf8400b42fe70baa517715d3a544f547decab2e94d6b16cd6fa68d" gracePeriod=30 Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.183260 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-jc7z7" Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.289790 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41faaec3-50be-468a-b6ea-8967aa8bbe99-scripts\") pod \"41faaec3-50be-468a-b6ea-8967aa8bbe99\" (UID: \"41faaec3-50be-468a-b6ea-8967aa8bbe99\") " Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.289865 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41faaec3-50be-468a-b6ea-8967aa8bbe99-combined-ca-bundle\") pod \"41faaec3-50be-468a-b6ea-8967aa8bbe99\" (UID: \"41faaec3-50be-468a-b6ea-8967aa8bbe99\") " Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.289956 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsx52\" (UniqueName: \"kubernetes.io/projected/41faaec3-50be-468a-b6ea-8967aa8bbe99-kube-api-access-vsx52\") pod \"41faaec3-50be-468a-b6ea-8967aa8bbe99\" (UID: \"41faaec3-50be-468a-b6ea-8967aa8bbe99\") " Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.290062 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41faaec3-50be-468a-b6ea-8967aa8bbe99-config-data\") pod \"41faaec3-50be-468a-b6ea-8967aa8bbe99\" (UID: \"41faaec3-50be-468a-b6ea-8967aa8bbe99\") " Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.296220 4760 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41faaec3-50be-468a-b6ea-8967aa8bbe99-kube-api-access-vsx52" (OuterVolumeSpecName: "kube-api-access-vsx52") pod "41faaec3-50be-468a-b6ea-8967aa8bbe99" (UID: "41faaec3-50be-468a-b6ea-8967aa8bbe99"). InnerVolumeSpecName "kube-api-access-vsx52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.307543 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41faaec3-50be-468a-b6ea-8967aa8bbe99-scripts" (OuterVolumeSpecName: "scripts") pod "41faaec3-50be-468a-b6ea-8967aa8bbe99" (UID: "41faaec3-50be-468a-b6ea-8967aa8bbe99"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.323820 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41faaec3-50be-468a-b6ea-8967aa8bbe99-config-data" (OuterVolumeSpecName: "config-data") pod "41faaec3-50be-468a-b6ea-8967aa8bbe99" (UID: "41faaec3-50be-468a-b6ea-8967aa8bbe99"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.323751 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41faaec3-50be-468a-b6ea-8967aa8bbe99-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "41faaec3-50be-468a-b6ea-8967aa8bbe99" (UID: "41faaec3-50be-468a-b6ea-8967aa8bbe99"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.394241 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsx52\" (UniqueName: \"kubernetes.io/projected/41faaec3-50be-468a-b6ea-8967aa8bbe99-kube-api-access-vsx52\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.394396 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41faaec3-50be-468a-b6ea-8967aa8bbe99-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.394411 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41faaec3-50be-468a-b6ea-8967aa8bbe99-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.394422 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41faaec3-50be-468a-b6ea-8967aa8bbe99-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.735518 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-jc7z7" event={"ID":"41faaec3-50be-468a-b6ea-8967aa8bbe99","Type":"ContainerDied","Data":"c94932625151bb09239d5c05a255a61766b1996484f31229aafa584fd1d0b8ab"} Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.735912 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c94932625151bb09239d5c05a255a61766b1996484f31229aafa584fd1d0b8ab" Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.736006 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-jc7z7" Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.765535 4760 generic.go:334] "Generic (PLEG): container finished" podID="bbabd97d-823a-4eb1-93e5-e91589735b4a" containerID="0547c56974bf8400b42fe70baa517715d3a544f547decab2e94d6b16cd6fa68d" exitCode=0 Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.765568 4760 generic.go:334] "Generic (PLEG): container finished" podID="bbabd97d-823a-4eb1-93e5-e91589735b4a" containerID="6bfbbc02544f493f7fcbeda139767036db774d7decbdee854bdfde91790687f0" exitCode=143 Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.765625 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bbabd97d-823a-4eb1-93e5-e91589735b4a","Type":"ContainerDied","Data":"0547c56974bf8400b42fe70baa517715d3a544f547decab2e94d6b16cd6fa68d"} Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.765657 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bbabd97d-823a-4eb1-93e5-e91589735b4a","Type":"ContainerDied","Data":"6bfbbc02544f493f7fcbeda139767036db774d7decbdee854bdfde91790687f0"} Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.771948 4760 generic.go:334] "Generic (PLEG): container finished" podID="e3cb009b-917a-4689-85cc-6d1a4669ebb5" containerID="94ab6d98383e0314fdbefca1dd3ea09eb8e094d0005666507d43c70cd376cc0d" exitCode=143 Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.771985 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e3cb009b-917a-4689-85cc-6d1a4669ebb5","Type":"ContainerDied","Data":"94ab6d98383e0314fdbefca1dd3ea09eb8e094d0005666507d43c70cd376cc0d"} Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.808359 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 21 16:09:18 crc kubenswrapper[4760]: E0121 16:09:18.809002 4760 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="28ae7881-d794-4020-ae6d-a192927d75c8" containerName="init" Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.809110 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="28ae7881-d794-4020-ae6d-a192927d75c8" containerName="init" Jan 21 16:09:18 crc kubenswrapper[4760]: E0121 16:09:18.809187 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337" containerName="nova-manage" Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.809245 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337" containerName="nova-manage" Jan 21 16:09:18 crc kubenswrapper[4760]: E0121 16:09:18.809302 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28ae7881-d794-4020-ae6d-a192927d75c8" containerName="dnsmasq-dns" Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.809462 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="28ae7881-d794-4020-ae6d-a192927d75c8" containerName="dnsmasq-dns" Jan 21 16:09:18 crc kubenswrapper[4760]: E0121 16:09:18.809526 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41faaec3-50be-468a-b6ea-8967aa8bbe99" containerName="nova-cell1-conductor-db-sync" Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.809596 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="41faaec3-50be-468a-b6ea-8967aa8bbe99" containerName="nova-cell1-conductor-db-sync" Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.809851 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="28ae7881-d794-4020-ae6d-a192927d75c8" containerName="dnsmasq-dns" Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.809927 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337" containerName="nova-manage" Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.809986 4760 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="41faaec3-50be-468a-b6ea-8967aa8bbe99" containerName="nova-cell1-conductor-db-sync" Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.810663 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.813897 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.832225 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.934073 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvvgp\" (UniqueName: \"kubernetes.io/projected/5bc3a5b4-ab7d-4215-bd61-ce6c206856ae-kube-api-access-qvvgp\") pod \"nova-cell1-conductor-0\" (UID: \"5bc3a5b4-ab7d-4215-bd61-ce6c206856ae\") " pod="openstack/nova-cell1-conductor-0" Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.934150 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bc3a5b4-ab7d-4215-bd61-ce6c206856ae-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"5bc3a5b4-ab7d-4215-bd61-ce6c206856ae\") " pod="openstack/nova-cell1-conductor-0" Jan 21 16:09:18 crc kubenswrapper[4760]: I0121 16:09:18.934274 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bc3a5b4-ab7d-4215-bd61-ce6c206856ae-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"5bc3a5b4-ab7d-4215-bd61-ce6c206856ae\") " pod="openstack/nova-cell1-conductor-0" Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.035945 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvvgp\" 
(UniqueName: \"kubernetes.io/projected/5bc3a5b4-ab7d-4215-bd61-ce6c206856ae-kube-api-access-qvvgp\") pod \"nova-cell1-conductor-0\" (UID: \"5bc3a5b4-ab7d-4215-bd61-ce6c206856ae\") " pod="openstack/nova-cell1-conductor-0" Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.036018 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bc3a5b4-ab7d-4215-bd61-ce6c206856ae-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"5bc3a5b4-ab7d-4215-bd61-ce6c206856ae\") " pod="openstack/nova-cell1-conductor-0" Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.036141 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bc3a5b4-ab7d-4215-bd61-ce6c206856ae-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"5bc3a5b4-ab7d-4215-bd61-ce6c206856ae\") " pod="openstack/nova-cell1-conductor-0" Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.045496 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bc3a5b4-ab7d-4215-bd61-ce6c206856ae-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"5bc3a5b4-ab7d-4215-bd61-ce6c206856ae\") " pod="openstack/nova-cell1-conductor-0" Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.050963 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bc3a5b4-ab7d-4215-bd61-ce6c206856ae-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"5bc3a5b4-ab7d-4215-bd61-ce6c206856ae\") " pod="openstack/nova-cell1-conductor-0" Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.065282 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvvgp\" (UniqueName: \"kubernetes.io/projected/5bc3a5b4-ab7d-4215-bd61-ce6c206856ae-kube-api-access-qvvgp\") pod \"nova-cell1-conductor-0\" 
(UID: \"5bc3a5b4-ab7d-4215-bd61-ce6c206856ae\") " pod="openstack/nova-cell1-conductor-0" Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.141853 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.157777 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.239511 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbabd97d-823a-4eb1-93e5-e91589735b4a-combined-ca-bundle\") pod \"bbabd97d-823a-4eb1-93e5-e91589735b4a\" (UID: \"bbabd97d-823a-4eb1-93e5-e91589735b4a\") " Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.239621 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5wcw\" (UniqueName: \"kubernetes.io/projected/bbabd97d-823a-4eb1-93e5-e91589735b4a-kube-api-access-t5wcw\") pod \"bbabd97d-823a-4eb1-93e5-e91589735b4a\" (UID: \"bbabd97d-823a-4eb1-93e5-e91589735b4a\") " Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.239656 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbabd97d-823a-4eb1-93e5-e91589735b4a-config-data\") pod \"bbabd97d-823a-4eb1-93e5-e91589735b4a\" (UID: \"bbabd97d-823a-4eb1-93e5-e91589735b4a\") " Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.239693 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbabd97d-823a-4eb1-93e5-e91589735b4a-logs\") pod \"bbabd97d-823a-4eb1-93e5-e91589735b4a\" (UID: \"bbabd97d-823a-4eb1-93e5-e91589735b4a\") " Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.239780 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbabd97d-823a-4eb1-93e5-e91589735b4a-nova-metadata-tls-certs\") pod \"bbabd97d-823a-4eb1-93e5-e91589735b4a\" (UID: \"bbabd97d-823a-4eb1-93e5-e91589735b4a\") " Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.240290 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbabd97d-823a-4eb1-93e5-e91589735b4a-logs" (OuterVolumeSpecName: "logs") pod "bbabd97d-823a-4eb1-93e5-e91589735b4a" (UID: "bbabd97d-823a-4eb1-93e5-e91589735b4a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.250563 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbabd97d-823a-4eb1-93e5-e91589735b4a-kube-api-access-t5wcw" (OuterVolumeSpecName: "kube-api-access-t5wcw") pod "bbabd97d-823a-4eb1-93e5-e91589735b4a" (UID: "bbabd97d-823a-4eb1-93e5-e91589735b4a"). InnerVolumeSpecName "kube-api-access-t5wcw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.279079 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbabd97d-823a-4eb1-93e5-e91589735b4a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bbabd97d-823a-4eb1-93e5-e91589735b4a" (UID: "bbabd97d-823a-4eb1-93e5-e91589735b4a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.280751 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbabd97d-823a-4eb1-93e5-e91589735b4a-config-data" (OuterVolumeSpecName: "config-data") pod "bbabd97d-823a-4eb1-93e5-e91589735b4a" (UID: "bbabd97d-823a-4eb1-93e5-e91589735b4a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.304358 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbabd97d-823a-4eb1-93e5-e91589735b4a-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "bbabd97d-823a-4eb1-93e5-e91589735b4a" (UID: "bbabd97d-823a-4eb1-93e5-e91589735b4a"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.342539 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbabd97d-823a-4eb1-93e5-e91589735b4a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.342571 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5wcw\" (UniqueName: \"kubernetes.io/projected/bbabd97d-823a-4eb1-93e5-e91589735b4a-kube-api-access-t5wcw\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.342584 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbabd97d-823a-4eb1-93e5-e91589735b4a-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.342592 4760 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbabd97d-823a-4eb1-93e5-e91589735b4a-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.342601 4760 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbabd97d-823a-4eb1-93e5-e91589735b4a-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:19 crc kubenswrapper[4760]: W0121 16:09:19.645582 4760 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5bc3a5b4_ab7d_4215_bd61_ce6c206856ae.slice/crio-a7132271a947bbc77cbac91cd766c8b37e381e3e8604fb01763ab7fa07d63a85 WatchSource:0}: Error finding container a7132271a947bbc77cbac91cd766c8b37e381e3e8604fb01763ab7fa07d63a85: Status 404 returned error can't find the container with id a7132271a947bbc77cbac91cd766c8b37e381e3e8604fb01763ab7fa07d63a85 Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.649398 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.797651 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"5bc3a5b4-ab7d-4215-bd61-ce6c206856ae","Type":"ContainerStarted","Data":"a7132271a947bbc77cbac91cd766c8b37e381e3e8604fb01763ab7fa07d63a85"} Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.800951 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bbabd97d-823a-4eb1-93e5-e91589735b4a","Type":"ContainerDied","Data":"74ea022fa8d55d16b391886f8a6ff9acda3aecd426fbca239fb736c3047b9b66"} Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.801056 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.801187 4760 scope.go:117] "RemoveContainer" containerID="0547c56974bf8400b42fe70baa517715d3a544f547decab2e94d6b16cd6fa68d" Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.834311 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.841592 4760 scope.go:117] "RemoveContainer" containerID="6bfbbc02544f493f7fcbeda139767036db774d7decbdee854bdfde91790687f0" Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.847462 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.857849 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:09:19 crc kubenswrapper[4760]: E0121 16:09:19.858441 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbabd97d-823a-4eb1-93e5-e91589735b4a" containerName="nova-metadata-log" Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.858466 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbabd97d-823a-4eb1-93e5-e91589735b4a" containerName="nova-metadata-log" Jan 21 16:09:19 crc kubenswrapper[4760]: E0121 16:09:19.858480 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbabd97d-823a-4eb1-93e5-e91589735b4a" containerName="nova-metadata-metadata" Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.858488 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbabd97d-823a-4eb1-93e5-e91589735b4a" containerName="nova-metadata-metadata" Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.858716 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbabd97d-823a-4eb1-93e5-e91589735b4a" containerName="nova-metadata-log" Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.858739 4760 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="bbabd97d-823a-4eb1-93e5-e91589735b4a" containerName="nova-metadata-metadata" Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.859989 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.864292 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.864452 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.872984 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.957875 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35c6bade-acb2-42c1-8c99-057c06eb8276-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"35c6bade-acb2-42c1-8c99-057c06eb8276\") " pod="openstack/nova-metadata-0" Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.957986 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/35c6bade-acb2-42c1-8c99-057c06eb8276-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"35c6bade-acb2-42c1-8c99-057c06eb8276\") " pod="openstack/nova-metadata-0" Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.958096 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4px98\" (UniqueName: \"kubernetes.io/projected/35c6bade-acb2-42c1-8c99-057c06eb8276-kube-api-access-4px98\") pod \"nova-metadata-0\" (UID: \"35c6bade-acb2-42c1-8c99-057c06eb8276\") " pod="openstack/nova-metadata-0" Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 
16:09:19.958154 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35c6bade-acb2-42c1-8c99-057c06eb8276-logs\") pod \"nova-metadata-0\" (UID: \"35c6bade-acb2-42c1-8c99-057c06eb8276\") " pod="openstack/nova-metadata-0" Jan 21 16:09:19 crc kubenswrapper[4760]: I0121 16:09:19.958243 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35c6bade-acb2-42c1-8c99-057c06eb8276-config-data\") pod \"nova-metadata-0\" (UID: \"35c6bade-acb2-42c1-8c99-057c06eb8276\") " pod="openstack/nova-metadata-0" Jan 21 16:09:20 crc kubenswrapper[4760]: I0121 16:09:20.062381 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35c6bade-acb2-42c1-8c99-057c06eb8276-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"35c6bade-acb2-42c1-8c99-057c06eb8276\") " pod="openstack/nova-metadata-0" Jan 21 16:09:20 crc kubenswrapper[4760]: I0121 16:09:20.062450 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/35c6bade-acb2-42c1-8c99-057c06eb8276-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"35c6bade-acb2-42c1-8c99-057c06eb8276\") " pod="openstack/nova-metadata-0" Jan 21 16:09:20 crc kubenswrapper[4760]: I0121 16:09:20.062508 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4px98\" (UniqueName: \"kubernetes.io/projected/35c6bade-acb2-42c1-8c99-057c06eb8276-kube-api-access-4px98\") pod \"nova-metadata-0\" (UID: \"35c6bade-acb2-42c1-8c99-057c06eb8276\") " pod="openstack/nova-metadata-0" Jan 21 16:09:20 crc kubenswrapper[4760]: I0121 16:09:20.062536 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/35c6bade-acb2-42c1-8c99-057c06eb8276-logs\") pod \"nova-metadata-0\" (UID: \"35c6bade-acb2-42c1-8c99-057c06eb8276\") " pod="openstack/nova-metadata-0" Jan 21 16:09:20 crc kubenswrapper[4760]: I0121 16:09:20.062590 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35c6bade-acb2-42c1-8c99-057c06eb8276-config-data\") pod \"nova-metadata-0\" (UID: \"35c6bade-acb2-42c1-8c99-057c06eb8276\") " pod="openstack/nova-metadata-0" Jan 21 16:09:20 crc kubenswrapper[4760]: I0121 16:09:20.064362 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35c6bade-acb2-42c1-8c99-057c06eb8276-logs\") pod \"nova-metadata-0\" (UID: \"35c6bade-acb2-42c1-8c99-057c06eb8276\") " pod="openstack/nova-metadata-0" Jan 21 16:09:20 crc kubenswrapper[4760]: I0121 16:09:20.071521 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35c6bade-acb2-42c1-8c99-057c06eb8276-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"35c6bade-acb2-42c1-8c99-057c06eb8276\") " pod="openstack/nova-metadata-0" Jan 21 16:09:20 crc kubenswrapper[4760]: I0121 16:09:20.071720 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35c6bade-acb2-42c1-8c99-057c06eb8276-config-data\") pod \"nova-metadata-0\" (UID: \"35c6bade-acb2-42c1-8c99-057c06eb8276\") " pod="openstack/nova-metadata-0" Jan 21 16:09:20 crc kubenswrapper[4760]: I0121 16:09:20.073913 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/35c6bade-acb2-42c1-8c99-057c06eb8276-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"35c6bade-acb2-42c1-8c99-057c06eb8276\") " pod="openstack/nova-metadata-0" Jan 21 16:09:20 crc kubenswrapper[4760]: I0121 
16:09:20.086404 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4px98\" (UniqueName: \"kubernetes.io/projected/35c6bade-acb2-42c1-8c99-057c06eb8276-kube-api-access-4px98\") pod \"nova-metadata-0\" (UID: \"35c6bade-acb2-42c1-8c99-057c06eb8276\") " pod="openstack/nova-metadata-0" Jan 21 16:09:20 crc kubenswrapper[4760]: I0121 16:09:20.252733 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 16:09:20 crc kubenswrapper[4760]: I0121 16:09:20.730375 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:09:20 crc kubenswrapper[4760]: I0121 16:09:20.828612 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"35c6bade-acb2-42c1-8c99-057c06eb8276","Type":"ContainerStarted","Data":"1ee015ca957a7beae356061a83ef5c01d4e053e8918062c6fd230aa253aca7d7"} Jan 21 16:09:20 crc kubenswrapper[4760]: I0121 16:09:20.886713 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"5bc3a5b4-ab7d-4215-bd61-ce6c206856ae","Type":"ContainerStarted","Data":"8359941b43be865739ef7b61aa74257bbf860c5c2ed8b3d03af612ca15363895"} Jan 21 16:09:20 crc kubenswrapper[4760]: I0121 16:09:20.887453 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 21 16:09:20 crc kubenswrapper[4760]: I0121 16:09:20.929894 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.929871187 podStartE2EDuration="2.929871187s" podCreationTimestamp="2026-01-21 16:09:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:09:20.920033684 +0000 UTC m=+1331.587803282" watchObservedRunningTime="2026-01-21 16:09:20.929871187 +0000 UTC m=+1331.597640835" Jan 21 
16:09:21 crc kubenswrapper[4760]: I0121 16:09:21.552153 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 16:09:21 crc kubenswrapper[4760]: I0121 16:09:21.603298 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wr48p\" (UniqueName: \"kubernetes.io/projected/e3cb009b-917a-4689-85cc-6d1a4669ebb5-kube-api-access-wr48p\") pod \"e3cb009b-917a-4689-85cc-6d1a4669ebb5\" (UID: \"e3cb009b-917a-4689-85cc-6d1a4669ebb5\") " Jan 21 16:09:21 crc kubenswrapper[4760]: I0121 16:09:21.603408 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3cb009b-917a-4689-85cc-6d1a4669ebb5-logs\") pod \"e3cb009b-917a-4689-85cc-6d1a4669ebb5\" (UID: \"e3cb009b-917a-4689-85cc-6d1a4669ebb5\") " Jan 21 16:09:21 crc kubenswrapper[4760]: I0121 16:09:21.603490 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3cb009b-917a-4689-85cc-6d1a4669ebb5-config-data\") pod \"e3cb009b-917a-4689-85cc-6d1a4669ebb5\" (UID: \"e3cb009b-917a-4689-85cc-6d1a4669ebb5\") " Jan 21 16:09:21 crc kubenswrapper[4760]: I0121 16:09:21.603528 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3cb009b-917a-4689-85cc-6d1a4669ebb5-combined-ca-bundle\") pod \"e3cb009b-917a-4689-85cc-6d1a4669ebb5\" (UID: \"e3cb009b-917a-4689-85cc-6d1a4669ebb5\") " Jan 21 16:09:21 crc kubenswrapper[4760]: I0121 16:09:21.604139 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3cb009b-917a-4689-85cc-6d1a4669ebb5-logs" (OuterVolumeSpecName: "logs") pod "e3cb009b-917a-4689-85cc-6d1a4669ebb5" (UID: "e3cb009b-917a-4689-85cc-6d1a4669ebb5"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:09:21 crc kubenswrapper[4760]: I0121 16:09:21.609837 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3cb009b-917a-4689-85cc-6d1a4669ebb5-kube-api-access-wr48p" (OuterVolumeSpecName: "kube-api-access-wr48p") pod "e3cb009b-917a-4689-85cc-6d1a4669ebb5" (UID: "e3cb009b-917a-4689-85cc-6d1a4669ebb5"). InnerVolumeSpecName "kube-api-access-wr48p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:09:21 crc kubenswrapper[4760]: I0121 16:09:21.633249 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbabd97d-823a-4eb1-93e5-e91589735b4a" path="/var/lib/kubelet/pods/bbabd97d-823a-4eb1-93e5-e91589735b4a/volumes" Jan 21 16:09:21 crc kubenswrapper[4760]: I0121 16:09:21.635292 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3cb009b-917a-4689-85cc-6d1a4669ebb5-config-data" (OuterVolumeSpecName: "config-data") pod "e3cb009b-917a-4689-85cc-6d1a4669ebb5" (UID: "e3cb009b-917a-4689-85cc-6d1a4669ebb5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:09:21 crc kubenswrapper[4760]: I0121 16:09:21.638096 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3cb009b-917a-4689-85cc-6d1a4669ebb5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e3cb009b-917a-4689-85cc-6d1a4669ebb5" (UID: "e3cb009b-917a-4689-85cc-6d1a4669ebb5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:09:21 crc kubenswrapper[4760]: I0121 16:09:21.706180 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wr48p\" (UniqueName: \"kubernetes.io/projected/e3cb009b-917a-4689-85cc-6d1a4669ebb5-kube-api-access-wr48p\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:21 crc kubenswrapper[4760]: I0121 16:09:21.706232 4760 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3cb009b-917a-4689-85cc-6d1a4669ebb5-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:21 crc kubenswrapper[4760]: I0121 16:09:21.706245 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3cb009b-917a-4689-85cc-6d1a4669ebb5-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:21 crc kubenswrapper[4760]: I0121 16:09:21.706259 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3cb009b-917a-4689-85cc-6d1a4669ebb5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:21 crc kubenswrapper[4760]: I0121 16:09:21.899591 4760 generic.go:334] "Generic (PLEG): container finished" podID="e3cb009b-917a-4689-85cc-6d1a4669ebb5" containerID="832c7fd8e879b69f95ea4e97d6c0cb8f7909a0d3173fa524839e917689c536b3" exitCode=0 Jan 21 16:09:21 crc kubenswrapper[4760]: I0121 16:09:21.899667 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e3cb009b-917a-4689-85cc-6d1a4669ebb5","Type":"ContainerDied","Data":"832c7fd8e879b69f95ea4e97d6c0cb8f7909a0d3173fa524839e917689c536b3"} Jan 21 16:09:21 crc kubenswrapper[4760]: I0121 16:09:21.899702 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e3cb009b-917a-4689-85cc-6d1a4669ebb5","Type":"ContainerDied","Data":"20fde2041e60b76fc38f96204c2f0661139392474a32e17d57658b6399adb0ef"} Jan 21 16:09:21 crc kubenswrapper[4760]: 
I0121 16:09:21.899720 4760 scope.go:117] "RemoveContainer" containerID="832c7fd8e879b69f95ea4e97d6c0cb8f7909a0d3173fa524839e917689c536b3" Jan 21 16:09:21 crc kubenswrapper[4760]: I0121 16:09:21.899863 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 16:09:21 crc kubenswrapper[4760]: I0121 16:09:21.910966 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"35c6bade-acb2-42c1-8c99-057c06eb8276","Type":"ContainerStarted","Data":"25ba7304db6d7e30d6b0471f6ab7040e11e85f4aa0b9b2c363b83ece5ec361ef"} Jan 21 16:09:21 crc kubenswrapper[4760]: I0121 16:09:21.911369 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"35c6bade-acb2-42c1-8c99-057c06eb8276","Type":"ContainerStarted","Data":"9608aed18863e1ce943813daf98a4d30895862d8cae4f22dabffae5a54efad17"} Jan 21 16:09:21 crc kubenswrapper[4760]: I0121 16:09:21.924997 4760 scope.go:117] "RemoveContainer" containerID="94ab6d98383e0314fdbefca1dd3ea09eb8e094d0005666507d43c70cd376cc0d" Jan 21 16:09:21 crc kubenswrapper[4760]: I0121 16:09:21.938800 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.938777461 podStartE2EDuration="2.938777461s" podCreationTimestamp="2026-01-21 16:09:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:09:21.92942956 +0000 UTC m=+1332.597199138" watchObservedRunningTime="2026-01-21 16:09:21.938777461 +0000 UTC m=+1332.606547039" Jan 21 16:09:21 crc kubenswrapper[4760]: I0121 16:09:21.957863 4760 scope.go:117] "RemoveContainer" containerID="832c7fd8e879b69f95ea4e97d6c0cb8f7909a0d3173fa524839e917689c536b3" Jan 21 16:09:21 crc kubenswrapper[4760]: E0121 16:09:21.958390 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could 
not find container \"832c7fd8e879b69f95ea4e97d6c0cb8f7909a0d3173fa524839e917689c536b3\": container with ID starting with 832c7fd8e879b69f95ea4e97d6c0cb8f7909a0d3173fa524839e917689c536b3 not found: ID does not exist" containerID="832c7fd8e879b69f95ea4e97d6c0cb8f7909a0d3173fa524839e917689c536b3" Jan 21 16:09:21 crc kubenswrapper[4760]: I0121 16:09:21.958422 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"832c7fd8e879b69f95ea4e97d6c0cb8f7909a0d3173fa524839e917689c536b3"} err="failed to get container status \"832c7fd8e879b69f95ea4e97d6c0cb8f7909a0d3173fa524839e917689c536b3\": rpc error: code = NotFound desc = could not find container \"832c7fd8e879b69f95ea4e97d6c0cb8f7909a0d3173fa524839e917689c536b3\": container with ID starting with 832c7fd8e879b69f95ea4e97d6c0cb8f7909a0d3173fa524839e917689c536b3 not found: ID does not exist" Jan 21 16:09:21 crc kubenswrapper[4760]: I0121 16:09:21.958444 4760 scope.go:117] "RemoveContainer" containerID="94ab6d98383e0314fdbefca1dd3ea09eb8e094d0005666507d43c70cd376cc0d" Jan 21 16:09:21 crc kubenswrapper[4760]: E0121 16:09:21.958791 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94ab6d98383e0314fdbefca1dd3ea09eb8e094d0005666507d43c70cd376cc0d\": container with ID starting with 94ab6d98383e0314fdbefca1dd3ea09eb8e094d0005666507d43c70cd376cc0d not found: ID does not exist" containerID="94ab6d98383e0314fdbefca1dd3ea09eb8e094d0005666507d43c70cd376cc0d" Jan 21 16:09:21 crc kubenswrapper[4760]: I0121 16:09:21.958890 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94ab6d98383e0314fdbefca1dd3ea09eb8e094d0005666507d43c70cd376cc0d"} err="failed to get container status \"94ab6d98383e0314fdbefca1dd3ea09eb8e094d0005666507d43c70cd376cc0d\": rpc error: code = NotFound desc = could not find container \"94ab6d98383e0314fdbefca1dd3ea09eb8e094d0005666507d43c70cd376cc0d\": 
container with ID starting with 94ab6d98383e0314fdbefca1dd3ea09eb8e094d0005666507d43c70cd376cc0d not found: ID does not exist" Jan 21 16:09:21 crc kubenswrapper[4760]: I0121 16:09:21.964695 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:09:21 crc kubenswrapper[4760]: I0121 16:09:21.976477 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:09:21 crc kubenswrapper[4760]: I0121 16:09:21.994535 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 21 16:09:21 crc kubenswrapper[4760]: E0121 16:09:21.995031 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3cb009b-917a-4689-85cc-6d1a4669ebb5" containerName="nova-api-api" Jan 21 16:09:21 crc kubenswrapper[4760]: I0121 16:09:21.995055 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3cb009b-917a-4689-85cc-6d1a4669ebb5" containerName="nova-api-api" Jan 21 16:09:21 crc kubenswrapper[4760]: E0121 16:09:21.995076 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3cb009b-917a-4689-85cc-6d1a4669ebb5" containerName="nova-api-log" Jan 21 16:09:21 crc kubenswrapper[4760]: I0121 16:09:21.995084 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3cb009b-917a-4689-85cc-6d1a4669ebb5" containerName="nova-api-log" Jan 21 16:09:21 crc kubenswrapper[4760]: I0121 16:09:21.995350 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3cb009b-917a-4689-85cc-6d1a4669ebb5" containerName="nova-api-log" Jan 21 16:09:21 crc kubenswrapper[4760]: I0121 16:09:21.995379 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3cb009b-917a-4689-85cc-6d1a4669ebb5" containerName="nova-api-api" Jan 21 16:09:21 crc kubenswrapper[4760]: I0121 16:09:21.996574 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 16:09:22 crc kubenswrapper[4760]: I0121 16:09:21.999611 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 21 16:09:22 crc kubenswrapper[4760]: I0121 16:09:22.007017 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:09:22 crc kubenswrapper[4760]: I0121 16:09:22.118618 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb7bc72f-a8cd-4725-a367-37f13677715c-logs\") pod \"nova-api-0\" (UID: \"fb7bc72f-a8cd-4725-a367-37f13677715c\") " pod="openstack/nova-api-0" Jan 21 16:09:22 crc kubenswrapper[4760]: I0121 16:09:22.118777 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb7bc72f-a8cd-4725-a367-37f13677715c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fb7bc72f-a8cd-4725-a367-37f13677715c\") " pod="openstack/nova-api-0" Jan 21 16:09:22 crc kubenswrapper[4760]: I0121 16:09:22.118851 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb7bc72f-a8cd-4725-a367-37f13677715c-config-data\") pod \"nova-api-0\" (UID: \"fb7bc72f-a8cd-4725-a367-37f13677715c\") " pod="openstack/nova-api-0" Jan 21 16:09:22 crc kubenswrapper[4760]: I0121 16:09:22.119079 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dblh8\" (UniqueName: \"kubernetes.io/projected/fb7bc72f-a8cd-4725-a367-37f13677715c-kube-api-access-dblh8\") pod \"nova-api-0\" (UID: \"fb7bc72f-a8cd-4725-a367-37f13677715c\") " pod="openstack/nova-api-0" Jan 21 16:09:22 crc kubenswrapper[4760]: I0121 16:09:22.221186 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/fb7bc72f-a8cd-4725-a367-37f13677715c-logs\") pod \"nova-api-0\" (UID: \"fb7bc72f-a8cd-4725-a367-37f13677715c\") " pod="openstack/nova-api-0" Jan 21 16:09:22 crc kubenswrapper[4760]: I0121 16:09:22.221289 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb7bc72f-a8cd-4725-a367-37f13677715c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fb7bc72f-a8cd-4725-a367-37f13677715c\") " pod="openstack/nova-api-0" Jan 21 16:09:22 crc kubenswrapper[4760]: I0121 16:09:22.221397 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb7bc72f-a8cd-4725-a367-37f13677715c-config-data\") pod \"nova-api-0\" (UID: \"fb7bc72f-a8cd-4725-a367-37f13677715c\") " pod="openstack/nova-api-0" Jan 21 16:09:22 crc kubenswrapper[4760]: I0121 16:09:22.221533 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dblh8\" (UniqueName: \"kubernetes.io/projected/fb7bc72f-a8cd-4725-a367-37f13677715c-kube-api-access-dblh8\") pod \"nova-api-0\" (UID: \"fb7bc72f-a8cd-4725-a367-37f13677715c\") " pod="openstack/nova-api-0" Jan 21 16:09:22 crc kubenswrapper[4760]: I0121 16:09:22.221676 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb7bc72f-a8cd-4725-a367-37f13677715c-logs\") pod \"nova-api-0\" (UID: \"fb7bc72f-a8cd-4725-a367-37f13677715c\") " pod="openstack/nova-api-0" Jan 21 16:09:22 crc kubenswrapper[4760]: I0121 16:09:22.225902 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb7bc72f-a8cd-4725-a367-37f13677715c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fb7bc72f-a8cd-4725-a367-37f13677715c\") " pod="openstack/nova-api-0" Jan 21 16:09:22 crc kubenswrapper[4760]: I0121 16:09:22.227668 4760 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb7bc72f-a8cd-4725-a367-37f13677715c-config-data\") pod \"nova-api-0\" (UID: \"fb7bc72f-a8cd-4725-a367-37f13677715c\") " pod="openstack/nova-api-0" Jan 21 16:09:22 crc kubenswrapper[4760]: I0121 16:09:22.245930 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dblh8\" (UniqueName: \"kubernetes.io/projected/fb7bc72f-a8cd-4725-a367-37f13677715c-kube-api-access-dblh8\") pod \"nova-api-0\" (UID: \"fb7bc72f-a8cd-4725-a367-37f13677715c\") " pod="openstack/nova-api-0" Jan 21 16:09:22 crc kubenswrapper[4760]: E0121 16:09:22.308003 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9f707b31ee83273b71392ff2f56827827dfdf06ddcdacb22da57f8cda94d68b0" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 21 16:09:22 crc kubenswrapper[4760]: E0121 16:09:22.310469 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9f707b31ee83273b71392ff2f56827827dfdf06ddcdacb22da57f8cda94d68b0" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 21 16:09:22 crc kubenswrapper[4760]: E0121 16:09:22.322510 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9f707b31ee83273b71392ff2f56827827dfdf06ddcdacb22da57f8cda94d68b0" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 21 16:09:22 crc kubenswrapper[4760]: E0121 16:09:22.322633 4760 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is 
stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="9c9d0bcf-8f09-4913-855f-f4409d61e726" containerName="nova-scheduler-scheduler" Jan 21 16:09:22 crc kubenswrapper[4760]: I0121 16:09:22.383701 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 16:09:22 crc kubenswrapper[4760]: I0121 16:09:22.859284 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:09:22 crc kubenswrapper[4760]: W0121 16:09:22.860233 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb7bc72f_a8cd_4725_a367_37f13677715c.slice/crio-ffaef25eceb211fc4d500273319b2ba7aa9cc192364bbecc89f8b49d12e6dd66 WatchSource:0}: Error finding container ffaef25eceb211fc4d500273319b2ba7aa9cc192364bbecc89f8b49d12e6dd66: Status 404 returned error can't find the container with id ffaef25eceb211fc4d500273319b2ba7aa9cc192364bbecc89f8b49d12e6dd66 Jan 21 16:09:22 crc kubenswrapper[4760]: I0121 16:09:22.925694 4760 generic.go:334] "Generic (PLEG): container finished" podID="9c9d0bcf-8f09-4913-855f-f4409d61e726" containerID="9f707b31ee83273b71392ff2f56827827dfdf06ddcdacb22da57f8cda94d68b0" exitCode=0 Jan 21 16:09:22 crc kubenswrapper[4760]: I0121 16:09:22.925774 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9c9d0bcf-8f09-4913-855f-f4409d61e726","Type":"ContainerDied","Data":"9f707b31ee83273b71392ff2f56827827dfdf06ddcdacb22da57f8cda94d68b0"} Jan 21 16:09:22 crc kubenswrapper[4760]: I0121 16:09:22.929732 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fb7bc72f-a8cd-4725-a367-37f13677715c","Type":"ContainerStarted","Data":"ffaef25eceb211fc4d500273319b2ba7aa9cc192364bbecc89f8b49d12e6dd66"} Jan 21 16:09:23 crc kubenswrapper[4760]: I0121 16:09:23.222565 4760 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 16:09:23 crc kubenswrapper[4760]: I0121 16:09:23.346986 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c9d0bcf-8f09-4913-855f-f4409d61e726-config-data\") pod \"9c9d0bcf-8f09-4913-855f-f4409d61e726\" (UID: \"9c9d0bcf-8f09-4913-855f-f4409d61e726\") " Jan 21 16:09:23 crc kubenswrapper[4760]: I0121 16:09:23.347387 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c9d0bcf-8f09-4913-855f-f4409d61e726-combined-ca-bundle\") pod \"9c9d0bcf-8f09-4913-855f-f4409d61e726\" (UID: \"9c9d0bcf-8f09-4913-855f-f4409d61e726\") " Jan 21 16:09:23 crc kubenswrapper[4760]: I0121 16:09:23.347653 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbbqc\" (UniqueName: \"kubernetes.io/projected/9c9d0bcf-8f09-4913-855f-f4409d61e726-kube-api-access-cbbqc\") pod \"9c9d0bcf-8f09-4913-855f-f4409d61e726\" (UID: \"9c9d0bcf-8f09-4913-855f-f4409d61e726\") " Jan 21 16:09:23 crc kubenswrapper[4760]: I0121 16:09:23.354284 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c9d0bcf-8f09-4913-855f-f4409d61e726-kube-api-access-cbbqc" (OuterVolumeSpecName: "kube-api-access-cbbqc") pod "9c9d0bcf-8f09-4913-855f-f4409d61e726" (UID: "9c9d0bcf-8f09-4913-855f-f4409d61e726"). InnerVolumeSpecName "kube-api-access-cbbqc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:09:23 crc kubenswrapper[4760]: I0121 16:09:23.383519 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c9d0bcf-8f09-4913-855f-f4409d61e726-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9c9d0bcf-8f09-4913-855f-f4409d61e726" (UID: "9c9d0bcf-8f09-4913-855f-f4409d61e726"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:09:23 crc kubenswrapper[4760]: I0121 16:09:23.384053 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c9d0bcf-8f09-4913-855f-f4409d61e726-config-data" (OuterVolumeSpecName: "config-data") pod "9c9d0bcf-8f09-4913-855f-f4409d61e726" (UID: "9c9d0bcf-8f09-4913-855f-f4409d61e726"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:09:23 crc kubenswrapper[4760]: I0121 16:09:23.451212 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c9d0bcf-8f09-4913-855f-f4409d61e726-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:23 crc kubenswrapper[4760]: I0121 16:09:23.451252 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbbqc\" (UniqueName: \"kubernetes.io/projected/9c9d0bcf-8f09-4913-855f-f4409d61e726-kube-api-access-cbbqc\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:23 crc kubenswrapper[4760]: I0121 16:09:23.451268 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c9d0bcf-8f09-4913-855f-f4409d61e726-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:23 crc kubenswrapper[4760]: I0121 16:09:23.634210 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3cb009b-917a-4689-85cc-6d1a4669ebb5" path="/var/lib/kubelet/pods/e3cb009b-917a-4689-85cc-6d1a4669ebb5/volumes" Jan 21 16:09:23 crc kubenswrapper[4760]: I0121 16:09:23.948698 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9c9d0bcf-8f09-4913-855f-f4409d61e726","Type":"ContainerDied","Data":"0231afbb84218aca62d55a801072377388e58f126925cc7b8176dc2a7b013ec3"} Jan 21 16:09:23 crc kubenswrapper[4760]: I0121 16:09:23.948771 4760 scope.go:117] "RemoveContainer" 
containerID="9f707b31ee83273b71392ff2f56827827dfdf06ddcdacb22da57f8cda94d68b0" Jan 21 16:09:23 crc kubenswrapper[4760]: I0121 16:09:23.948719 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 16:09:23 crc kubenswrapper[4760]: I0121 16:09:23.952827 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fb7bc72f-a8cd-4725-a367-37f13677715c","Type":"ContainerStarted","Data":"48b6e0901adec3653e7e77d5c9db294ccdc5f9df965bed4cfc3e05e12429c0ba"} Jan 21 16:09:23 crc kubenswrapper[4760]: I0121 16:09:23.952881 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fb7bc72f-a8cd-4725-a367-37f13677715c","Type":"ContainerStarted","Data":"7a83acd79227de5dffb38c323a6ccdce2a87379ee330b378e96378fdb13b3d98"} Jan 21 16:09:23 crc kubenswrapper[4760]: I0121 16:09:23.977003 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 16:09:23 crc kubenswrapper[4760]: I0121 16:09:23.988278 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 16:09:23 crc kubenswrapper[4760]: I0121 16:09:23.989575 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.989549323 podStartE2EDuration="2.989549323s" podCreationTimestamp="2026-01-21 16:09:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:09:23.985678718 +0000 UTC m=+1334.653448296" watchObservedRunningTime="2026-01-21 16:09:23.989549323 +0000 UTC m=+1334.657318901" Jan 21 16:09:24 crc kubenswrapper[4760]: I0121 16:09:24.012491 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 16:09:24 crc kubenswrapper[4760]: E0121 16:09:24.013042 4760 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="9c9d0bcf-8f09-4913-855f-f4409d61e726" containerName="nova-scheduler-scheduler" Jan 21 16:09:24 crc kubenswrapper[4760]: I0121 16:09:24.013067 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c9d0bcf-8f09-4913-855f-f4409d61e726" containerName="nova-scheduler-scheduler" Jan 21 16:09:24 crc kubenswrapper[4760]: I0121 16:09:24.013301 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c9d0bcf-8f09-4913-855f-f4409d61e726" containerName="nova-scheduler-scheduler" Jan 21 16:09:24 crc kubenswrapper[4760]: I0121 16:09:24.014253 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 16:09:24 crc kubenswrapper[4760]: I0121 16:09:24.017470 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 21 16:09:24 crc kubenswrapper[4760]: I0121 16:09:24.037972 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 16:09:24 crc kubenswrapper[4760]: I0121 16:09:24.062064 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/977a4ae3-97df-4bc4-be2d-7cc230908f0c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"977a4ae3-97df-4bc4-be2d-7cc230908f0c\") " pod="openstack/nova-scheduler-0" Jan 21 16:09:24 crc kubenswrapper[4760]: I0121 16:09:24.062386 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbmd4\" (UniqueName: \"kubernetes.io/projected/977a4ae3-97df-4bc4-be2d-7cc230908f0c-kube-api-access-fbmd4\") pod \"nova-scheduler-0\" (UID: \"977a4ae3-97df-4bc4-be2d-7cc230908f0c\") " pod="openstack/nova-scheduler-0" Jan 21 16:09:24 crc kubenswrapper[4760]: I0121 16:09:24.062560 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/977a4ae3-97df-4bc4-be2d-7cc230908f0c-config-data\") pod \"nova-scheduler-0\" (UID: \"977a4ae3-97df-4bc4-be2d-7cc230908f0c\") " pod="openstack/nova-scheduler-0" Jan 21 16:09:24 crc kubenswrapper[4760]: I0121 16:09:24.164770 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/977a4ae3-97df-4bc4-be2d-7cc230908f0c-config-data\") pod \"nova-scheduler-0\" (UID: \"977a4ae3-97df-4bc4-be2d-7cc230908f0c\") " pod="openstack/nova-scheduler-0" Jan 21 16:09:24 crc kubenswrapper[4760]: I0121 16:09:24.164890 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/977a4ae3-97df-4bc4-be2d-7cc230908f0c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"977a4ae3-97df-4bc4-be2d-7cc230908f0c\") " pod="openstack/nova-scheduler-0" Jan 21 16:09:24 crc kubenswrapper[4760]: I0121 16:09:24.164935 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbmd4\" (UniqueName: \"kubernetes.io/projected/977a4ae3-97df-4bc4-be2d-7cc230908f0c-kube-api-access-fbmd4\") pod \"nova-scheduler-0\" (UID: \"977a4ae3-97df-4bc4-be2d-7cc230908f0c\") " pod="openstack/nova-scheduler-0" Jan 21 16:09:24 crc kubenswrapper[4760]: I0121 16:09:24.170710 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/977a4ae3-97df-4bc4-be2d-7cc230908f0c-config-data\") pod \"nova-scheduler-0\" (UID: \"977a4ae3-97df-4bc4-be2d-7cc230908f0c\") " pod="openstack/nova-scheduler-0" Jan 21 16:09:24 crc kubenswrapper[4760]: I0121 16:09:24.170800 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/977a4ae3-97df-4bc4-be2d-7cc230908f0c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"977a4ae3-97df-4bc4-be2d-7cc230908f0c\") " pod="openstack/nova-scheduler-0" 
Jan 21 16:09:24 crc kubenswrapper[4760]: I0121 16:09:24.186086 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbmd4\" (UniqueName: \"kubernetes.io/projected/977a4ae3-97df-4bc4-be2d-7cc230908f0c-kube-api-access-fbmd4\") pod \"nova-scheduler-0\" (UID: \"977a4ae3-97df-4bc4-be2d-7cc230908f0c\") " pod="openstack/nova-scheduler-0" Jan 21 16:09:24 crc kubenswrapper[4760]: I0121 16:09:24.357592 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 16:09:24 crc kubenswrapper[4760]: I0121 16:09:24.713508 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 16:09:24 crc kubenswrapper[4760]: I0121 16:09:24.964233 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"977a4ae3-97df-4bc4-be2d-7cc230908f0c","Type":"ContainerStarted","Data":"b817c1aab3260e56cb12d3ceb93b1cd2bd3c33efcad5f9f08aaec8f7eed6bb57"} Jan 21 16:09:25 crc kubenswrapper[4760]: I0121 16:09:25.253490 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 16:09:25 crc kubenswrapper[4760]: I0121 16:09:25.253569 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 16:09:25 crc kubenswrapper[4760]: I0121 16:09:25.635537 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c9d0bcf-8f09-4913-855f-f4409d61e726" path="/var/lib/kubelet/pods/9c9d0bcf-8f09-4913-855f-f4409d61e726/volumes" Jan 21 16:09:25 crc kubenswrapper[4760]: I0121 16:09:25.977838 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"977a4ae3-97df-4bc4-be2d-7cc230908f0c","Type":"ContainerStarted","Data":"3ae69e7b998f3c4c12f2876ad704f8fe62fbf6f0bd6ff0ee8a3fb1ae6e455567"} Jan 21 16:09:26 crc kubenswrapper[4760]: I0121 16:09:26.004815 4760 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.004790667 podStartE2EDuration="3.004790667s" podCreationTimestamp="2026-01-21 16:09:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:09:25.995251111 +0000 UTC m=+1336.663020699" watchObservedRunningTime="2026-01-21 16:09:26.004790667 +0000 UTC m=+1336.672560245" Jan 21 16:09:29 crc kubenswrapper[4760]: I0121 16:09:29.185436 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 21 16:09:29 crc kubenswrapper[4760]: I0121 16:09:29.358949 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 21 16:09:30 crc kubenswrapper[4760]: I0121 16:09:30.254669 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 21 16:09:30 crc kubenswrapper[4760]: I0121 16:09:30.254811 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 21 16:09:31 crc kubenswrapper[4760]: I0121 16:09:31.267517 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="35c6bade-acb2-42c1-8c99-057c06eb8276" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.198:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 16:09:31 crc kubenswrapper[4760]: I0121 16:09:31.267565 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="35c6bade-acb2-42c1-8c99-057c06eb8276" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.198:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 16:09:32 crc kubenswrapper[4760]: I0121 16:09:32.384236 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/nova-api-0" Jan 21 16:09:32 crc kubenswrapper[4760]: I0121 16:09:32.384682 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 16:09:33 crc kubenswrapper[4760]: I0121 16:09:33.425630 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="fb7bc72f-a8cd-4725-a367-37f13677715c" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.199:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:09:33 crc kubenswrapper[4760]: I0121 16:09:33.466619 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="fb7bc72f-a8cd-4725-a367-37f13677715c" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.199:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 16:09:34 crc kubenswrapper[4760]: I0121 16:09:34.358437 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 21 16:09:34 crc kubenswrapper[4760]: I0121 16:09:34.393313 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 21 16:09:35 crc kubenswrapper[4760]: I0121 16:09:35.157988 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 21 16:09:40 crc kubenswrapper[4760]: I0121 16:09:40.260045 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 21 16:09:40 crc kubenswrapper[4760]: I0121 16:09:40.265377 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 21 16:09:40 crc kubenswrapper[4760]: I0121 16:09:40.266416 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 21 16:09:41 crc kubenswrapper[4760]: I0121 
16:09:41.148008 4760 generic.go:334] "Generic (PLEG): container finished" podID="c896aef3-816a-45b4-80fc-f21db51900ad" containerID="aab5a746fdacd0e051a5f0fb8bb2c6015a6440226ea0d3546c742938997503b8" exitCode=137 Jan 21 16:09:41 crc kubenswrapper[4760]: I0121 16:09:41.148103 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c896aef3-816a-45b4-80fc-f21db51900ad","Type":"ContainerDied","Data":"aab5a746fdacd0e051a5f0fb8bb2c6015a6440226ea0d3546c742938997503b8"} Jan 21 16:09:41 crc kubenswrapper[4760]: I0121 16:09:41.154222 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 21 16:09:41 crc kubenswrapper[4760]: I0121 16:09:41.565218 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:09:41 crc kubenswrapper[4760]: I0121 16:09:41.721688 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c896aef3-816a-45b4-80fc-f21db51900ad-config-data\") pod \"c896aef3-816a-45b4-80fc-f21db51900ad\" (UID: \"c896aef3-816a-45b4-80fc-f21db51900ad\") " Jan 21 16:09:41 crc kubenswrapper[4760]: I0121 16:09:41.722219 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c896aef3-816a-45b4-80fc-f21db51900ad-combined-ca-bundle\") pod \"c896aef3-816a-45b4-80fc-f21db51900ad\" (UID: \"c896aef3-816a-45b4-80fc-f21db51900ad\") " Jan 21 16:09:41 crc kubenswrapper[4760]: I0121 16:09:41.722291 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5gww\" (UniqueName: \"kubernetes.io/projected/c896aef3-816a-45b4-80fc-f21db51900ad-kube-api-access-d5gww\") pod \"c896aef3-816a-45b4-80fc-f21db51900ad\" (UID: \"c896aef3-816a-45b4-80fc-f21db51900ad\") " Jan 21 16:09:41 crc kubenswrapper[4760]: I0121 
16:09:41.727419 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c896aef3-816a-45b4-80fc-f21db51900ad-kube-api-access-d5gww" (OuterVolumeSpecName: "kube-api-access-d5gww") pod "c896aef3-816a-45b4-80fc-f21db51900ad" (UID: "c896aef3-816a-45b4-80fc-f21db51900ad"). InnerVolumeSpecName "kube-api-access-d5gww". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:09:41 crc kubenswrapper[4760]: I0121 16:09:41.762268 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c896aef3-816a-45b4-80fc-f21db51900ad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c896aef3-816a-45b4-80fc-f21db51900ad" (UID: "c896aef3-816a-45b4-80fc-f21db51900ad"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:09:41 crc kubenswrapper[4760]: I0121 16:09:41.763416 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c896aef3-816a-45b4-80fc-f21db51900ad-config-data" (OuterVolumeSpecName: "config-data") pod "c896aef3-816a-45b4-80fc-f21db51900ad" (UID: "c896aef3-816a-45b4-80fc-f21db51900ad"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:09:41 crc kubenswrapper[4760]: I0121 16:09:41.824318 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5gww\" (UniqueName: \"kubernetes.io/projected/c896aef3-816a-45b4-80fc-f21db51900ad-kube-api-access-d5gww\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:41 crc kubenswrapper[4760]: I0121 16:09:41.824400 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c896aef3-816a-45b4-80fc-f21db51900ad-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:41 crc kubenswrapper[4760]: I0121 16:09:41.824418 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c896aef3-816a-45b4-80fc-f21db51900ad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:42 crc kubenswrapper[4760]: I0121 16:09:42.159418 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c896aef3-816a-45b4-80fc-f21db51900ad","Type":"ContainerDied","Data":"1ab992d3c51de8ff1d97b84954d6b00bb6cc8f659b4897205f16fa95d66ea9f8"} Jan 21 16:09:42 crc kubenswrapper[4760]: I0121 16:09:42.159468 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:09:42 crc kubenswrapper[4760]: I0121 16:09:42.159531 4760 scope.go:117] "RemoveContainer" containerID="aab5a746fdacd0e051a5f0fb8bb2c6015a6440226ea0d3546c742938997503b8" Jan 21 16:09:42 crc kubenswrapper[4760]: I0121 16:09:42.195298 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 16:09:42 crc kubenswrapper[4760]: I0121 16:09:42.209934 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 16:09:42 crc kubenswrapper[4760]: I0121 16:09:42.229465 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 16:09:42 crc kubenswrapper[4760]: E0121 16:09:42.229974 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c896aef3-816a-45b4-80fc-f21db51900ad" containerName="nova-cell1-novncproxy-novncproxy" Jan 21 16:09:42 crc kubenswrapper[4760]: I0121 16:09:42.229999 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="c896aef3-816a-45b4-80fc-f21db51900ad" containerName="nova-cell1-novncproxy-novncproxy" Jan 21 16:09:42 crc kubenswrapper[4760]: I0121 16:09:42.230272 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="c896aef3-816a-45b4-80fc-f21db51900ad" containerName="nova-cell1-novncproxy-novncproxy" Jan 21 16:09:42 crc kubenswrapper[4760]: I0121 16:09:42.231013 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:09:42 crc kubenswrapper[4760]: I0121 16:09:42.238094 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 21 16:09:42 crc kubenswrapper[4760]: I0121 16:09:42.238383 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 21 16:09:42 crc kubenswrapper[4760]: I0121 16:09:42.238566 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 21 16:09:42 crc kubenswrapper[4760]: I0121 16:09:42.239798 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 16:09:42 crc kubenswrapper[4760]: I0121 16:09:42.387996 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 16:09:42 crc kubenswrapper[4760]: I0121 16:09:42.388559 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 16:09:42 crc kubenswrapper[4760]: I0121 16:09:42.388844 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 16:09:42 crc kubenswrapper[4760]: I0121 16:09:42.391739 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 16:09:42 crc kubenswrapper[4760]: I0121 16:09:42.434297 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a3e9e72-ecf6-406f-ab2b-02804c7f23e5-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7a3e9e72-ecf6-406f-ab2b-02804c7f23e5\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:09:42 crc kubenswrapper[4760]: I0121 16:09:42.434953 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a3e9e72-ecf6-406f-ab2b-02804c7f23e5-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7a3e9e72-ecf6-406f-ab2b-02804c7f23e5\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:09:42 crc kubenswrapper[4760]: I0121 16:09:42.435249 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a3e9e72-ecf6-406f-ab2b-02804c7f23e5-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7a3e9e72-ecf6-406f-ab2b-02804c7f23e5\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:09:42 crc kubenswrapper[4760]: I0121 16:09:42.435302 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a3e9e72-ecf6-406f-ab2b-02804c7f23e5-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7a3e9e72-ecf6-406f-ab2b-02804c7f23e5\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:09:42 crc kubenswrapper[4760]: I0121 16:09:42.435422 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mztzr\" (UniqueName: \"kubernetes.io/projected/7a3e9e72-ecf6-406f-ab2b-02804c7f23e5-kube-api-access-mztzr\") pod \"nova-cell1-novncproxy-0\" (UID: \"7a3e9e72-ecf6-406f-ab2b-02804c7f23e5\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:09:42 crc kubenswrapper[4760]: I0121 16:09:42.536973 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a3e9e72-ecf6-406f-ab2b-02804c7f23e5-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7a3e9e72-ecf6-406f-ab2b-02804c7f23e5\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:09:42 crc kubenswrapper[4760]: I0121 16:09:42.537039 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7a3e9e72-ecf6-406f-ab2b-02804c7f23e5-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7a3e9e72-ecf6-406f-ab2b-02804c7f23e5\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:09:42 crc kubenswrapper[4760]: I0121 16:09:42.537078 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mztzr\" (UniqueName: \"kubernetes.io/projected/7a3e9e72-ecf6-406f-ab2b-02804c7f23e5-kube-api-access-mztzr\") pod \"nova-cell1-novncproxy-0\" (UID: \"7a3e9e72-ecf6-406f-ab2b-02804c7f23e5\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:09:42 crc kubenswrapper[4760]: I0121 16:09:42.537113 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a3e9e72-ecf6-406f-ab2b-02804c7f23e5-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7a3e9e72-ecf6-406f-ab2b-02804c7f23e5\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:09:42 crc kubenswrapper[4760]: I0121 16:09:42.537172 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a3e9e72-ecf6-406f-ab2b-02804c7f23e5-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7a3e9e72-ecf6-406f-ab2b-02804c7f23e5\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:09:42 crc kubenswrapper[4760]: I0121 16:09:42.543174 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a3e9e72-ecf6-406f-ab2b-02804c7f23e5-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7a3e9e72-ecf6-406f-ab2b-02804c7f23e5\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:09:42 crc kubenswrapper[4760]: I0121 16:09:42.543642 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a3e9e72-ecf6-406f-ab2b-02804c7f23e5-config-data\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"7a3e9e72-ecf6-406f-ab2b-02804c7f23e5\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:09:42 crc kubenswrapper[4760]: I0121 16:09:42.543964 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a3e9e72-ecf6-406f-ab2b-02804c7f23e5-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7a3e9e72-ecf6-406f-ab2b-02804c7f23e5\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:09:42 crc kubenswrapper[4760]: I0121 16:09:42.553135 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a3e9e72-ecf6-406f-ab2b-02804c7f23e5-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7a3e9e72-ecf6-406f-ab2b-02804c7f23e5\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:09:42 crc kubenswrapper[4760]: I0121 16:09:42.556429 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mztzr\" (UniqueName: \"kubernetes.io/projected/7a3e9e72-ecf6-406f-ab2b-02804c7f23e5-kube-api-access-mztzr\") pod \"nova-cell1-novncproxy-0\" (UID: \"7a3e9e72-ecf6-406f-ab2b-02804c7f23e5\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:09:42 crc kubenswrapper[4760]: I0121 16:09:42.570934 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:09:43 crc kubenswrapper[4760]: I0121 16:09:43.062138 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 16:09:43 crc kubenswrapper[4760]: I0121 16:09:43.190898 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7a3e9e72-ecf6-406f-ab2b-02804c7f23e5","Type":"ContainerStarted","Data":"e00db80a0681861b2e891e6b69dfbd1e44b3ea940102de79773e70e5958e87e4"} Jan 21 16:09:43 crc kubenswrapper[4760]: I0121 16:09:43.191724 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 16:09:43 crc kubenswrapper[4760]: I0121 16:09:43.205813 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 16:09:43 crc kubenswrapper[4760]: I0121 16:09:43.423866 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-jzlg2"] Jan 21 16:09:43 crc kubenswrapper[4760]: I0121 16:09:43.431035 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-jzlg2" Jan 21 16:09:43 crc kubenswrapper[4760]: I0121 16:09:43.475843 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-jzlg2"] Jan 21 16:09:43 crc kubenswrapper[4760]: I0121 16:09:43.560477 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bddc2f23-658d-41d3-a844-389116907417-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-jzlg2\" (UID: \"bddc2f23-658d-41d3-a844-389116907417\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-jzlg2" Jan 21 16:09:43 crc kubenswrapper[4760]: I0121 16:09:43.560690 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5mp2\" (UniqueName: \"kubernetes.io/projected/bddc2f23-658d-41d3-a844-389116907417-kube-api-access-z5mp2\") pod \"dnsmasq-dns-cd5cbd7b9-jzlg2\" (UID: \"bddc2f23-658d-41d3-a844-389116907417\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-jzlg2" Jan 21 16:09:43 crc kubenswrapper[4760]: I0121 16:09:43.560771 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bddc2f23-658d-41d3-a844-389116907417-config\") pod \"dnsmasq-dns-cd5cbd7b9-jzlg2\" (UID: \"bddc2f23-658d-41d3-a844-389116907417\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-jzlg2" Jan 21 16:09:43 crc kubenswrapper[4760]: I0121 16:09:43.560905 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bddc2f23-658d-41d3-a844-389116907417-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-jzlg2\" (UID: \"bddc2f23-658d-41d3-a844-389116907417\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-jzlg2" Jan 21 16:09:43 crc kubenswrapper[4760]: I0121 16:09:43.561007 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bddc2f23-658d-41d3-a844-389116907417-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-jzlg2\" (UID: \"bddc2f23-658d-41d3-a844-389116907417\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-jzlg2" Jan 21 16:09:43 crc kubenswrapper[4760]: I0121 16:09:43.561176 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bddc2f23-658d-41d3-a844-389116907417-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-jzlg2\" (UID: \"bddc2f23-658d-41d3-a844-389116907417\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-jzlg2" Jan 21 16:09:43 crc kubenswrapper[4760]: I0121 16:09:43.638031 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c896aef3-816a-45b4-80fc-f21db51900ad" path="/var/lib/kubelet/pods/c896aef3-816a-45b4-80fc-f21db51900ad/volumes" Jan 21 16:09:43 crc kubenswrapper[4760]: I0121 16:09:43.666010 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bddc2f23-658d-41d3-a844-389116907417-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-jzlg2\" (UID: \"bddc2f23-658d-41d3-a844-389116907417\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-jzlg2" Jan 21 16:09:43 crc kubenswrapper[4760]: I0121 16:09:43.666550 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bddc2f23-658d-41d3-a844-389116907417-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-jzlg2\" (UID: \"bddc2f23-658d-41d3-a844-389116907417\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-jzlg2" Jan 21 16:09:43 crc kubenswrapper[4760]: I0121 16:09:43.667224 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bddc2f23-658d-41d3-a844-389116907417-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-jzlg2\" 
(UID: \"bddc2f23-658d-41d3-a844-389116907417\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-jzlg2" Jan 21 16:09:43 crc kubenswrapper[4760]: I0121 16:09:43.667233 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bddc2f23-658d-41d3-a844-389116907417-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-jzlg2\" (UID: \"bddc2f23-658d-41d3-a844-389116907417\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-jzlg2" Jan 21 16:09:43 crc kubenswrapper[4760]: I0121 16:09:43.667600 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5mp2\" (UniqueName: \"kubernetes.io/projected/bddc2f23-658d-41d3-a844-389116907417-kube-api-access-z5mp2\") pod \"dnsmasq-dns-cd5cbd7b9-jzlg2\" (UID: \"bddc2f23-658d-41d3-a844-389116907417\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-jzlg2" Jan 21 16:09:43 crc kubenswrapper[4760]: I0121 16:09:43.667631 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bddc2f23-658d-41d3-a844-389116907417-config\") pod \"dnsmasq-dns-cd5cbd7b9-jzlg2\" (UID: \"bddc2f23-658d-41d3-a844-389116907417\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-jzlg2" Jan 21 16:09:43 crc kubenswrapper[4760]: I0121 16:09:43.667858 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bddc2f23-658d-41d3-a844-389116907417-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-jzlg2\" (UID: \"bddc2f23-658d-41d3-a844-389116907417\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-jzlg2" Jan 21 16:09:43 crc kubenswrapper[4760]: I0121 16:09:43.668257 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bddc2f23-658d-41d3-a844-389116907417-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-jzlg2\" (UID: \"bddc2f23-658d-41d3-a844-389116907417\") " 
pod="openstack/dnsmasq-dns-cd5cbd7b9-jzlg2" Jan 21 16:09:43 crc kubenswrapper[4760]: I0121 16:09:43.668656 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bddc2f23-658d-41d3-a844-389116907417-config\") pod \"dnsmasq-dns-cd5cbd7b9-jzlg2\" (UID: \"bddc2f23-658d-41d3-a844-389116907417\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-jzlg2" Jan 21 16:09:43 crc kubenswrapper[4760]: I0121 16:09:43.669047 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bddc2f23-658d-41d3-a844-389116907417-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-jzlg2\" (UID: \"bddc2f23-658d-41d3-a844-389116907417\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-jzlg2" Jan 21 16:09:43 crc kubenswrapper[4760]: I0121 16:09:43.669065 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bddc2f23-658d-41d3-a844-389116907417-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-jzlg2\" (UID: \"bddc2f23-658d-41d3-a844-389116907417\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-jzlg2" Jan 21 16:09:43 crc kubenswrapper[4760]: I0121 16:09:43.673207 4760 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod28ae7881-d794-4020-ae6d-a192927d75c8"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod28ae7881-d794-4020-ae6d-a192927d75c8] : Timed out while waiting for systemd to remove kubepods-besteffort-pod28ae7881_d794_4020_ae6d_a192927d75c8.slice" Jan 21 16:09:43 crc kubenswrapper[4760]: I0121 16:09:43.689714 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5mp2\" (UniqueName: \"kubernetes.io/projected/bddc2f23-658d-41d3-a844-389116907417-kube-api-access-z5mp2\") pod \"dnsmasq-dns-cd5cbd7b9-jzlg2\" (UID: \"bddc2f23-658d-41d3-a844-389116907417\") " 
pod="openstack/dnsmasq-dns-cd5cbd7b9-jzlg2" Jan 21 16:09:43 crc kubenswrapper[4760]: I0121 16:09:43.761942 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-jzlg2" Jan 21 16:09:44 crc kubenswrapper[4760]: I0121 16:09:44.235062 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7a3e9e72-ecf6-406f-ab2b-02804c7f23e5","Type":"ContainerStarted","Data":"c7093264e694fe132fbf45380fc403b98c2f8b995b195cb2b69c834b5684b5ed"} Jan 21 16:09:44 crc kubenswrapper[4760]: I0121 16:09:44.271817 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.271800221 podStartE2EDuration="2.271800221s" podCreationTimestamp="2026-01-21 16:09:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:09:44.269423533 +0000 UTC m=+1354.937193111" watchObservedRunningTime="2026-01-21 16:09:44.271800221 +0000 UTC m=+1354.939569799" Jan 21 16:09:44 crc kubenswrapper[4760]: I0121 16:09:44.306241 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-jzlg2"] Jan 21 16:09:45 crc kubenswrapper[4760]: I0121 16:09:45.245247 4760 generic.go:334] "Generic (PLEG): container finished" podID="bddc2f23-658d-41d3-a844-389116907417" containerID="b4b2646a353e81ac3b5e3c15b4bf608625b2f01c7a18ff2ce0c4a31553df187f" exitCode=0 Jan 21 16:09:45 crc kubenswrapper[4760]: I0121 16:09:45.245401 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-jzlg2" event={"ID":"bddc2f23-658d-41d3-a844-389116907417","Type":"ContainerDied","Data":"b4b2646a353e81ac3b5e3c15b4bf608625b2f01c7a18ff2ce0c4a31553df187f"} Jan 21 16:09:45 crc kubenswrapper[4760]: I0121 16:09:45.246066 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-jzlg2" 
event={"ID":"bddc2f23-658d-41d3-a844-389116907417","Type":"ContainerStarted","Data":"2620afc8d15e0a129415e8974b7b98a2159727dd5709fa0bc1367c3ee032a9c6"} Jan 21 16:09:45 crc kubenswrapper[4760]: I0121 16:09:45.870779 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:09:45 crc kubenswrapper[4760]: I0121 16:09:45.965768 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 16:09:45 crc kubenswrapper[4760]: I0121 16:09:45.966094 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="37dad9ac-6a5d-42a3-8d27-950f125ba73e" containerName="ceilometer-central-agent" containerID="cri-o://4a4ec7c0621ab04bfc1330f49096ad6beaefcc2da4c716e22cd73a17987b5f36" gracePeriod=30 Jan 21 16:09:45 crc kubenswrapper[4760]: I0121 16:09:45.966160 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="37dad9ac-6a5d-42a3-8d27-950f125ba73e" containerName="ceilometer-notification-agent" containerID="cri-o://dad8da53bde86d9756a9332e991bdb9e5dee4591dabb79112383d1fff4d27d37" gracePeriod=30 Jan 21 16:09:45 crc kubenswrapper[4760]: I0121 16:09:45.966121 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="37dad9ac-6a5d-42a3-8d27-950f125ba73e" containerName="sg-core" containerID="cri-o://be55ee2de0ebd58e732e8041653a6c569076e0b5d1115f76b9a39c89e011e777" gracePeriod=30 Jan 21 16:09:45 crc kubenswrapper[4760]: I0121 16:09:45.966203 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="37dad9ac-6a5d-42a3-8d27-950f125ba73e" containerName="proxy-httpd" containerID="cri-o://64d1b04ee3e6e2ab89c4b8cd93cd15c495a27540aa681776839822fea4e64771" gracePeriod=30 Jan 21 16:09:46 crc kubenswrapper[4760]: I0121 16:09:46.258734 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-cd5cbd7b9-jzlg2" event={"ID":"bddc2f23-658d-41d3-a844-389116907417","Type":"ContainerStarted","Data":"89c4280d962cbb3e8f6b25bcb5a9e0cb9f66621707491e50a973c694fa73ac24"} Jan 21 16:09:46 crc kubenswrapper[4760]: I0121 16:09:46.259111 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cd5cbd7b9-jzlg2" Jan 21 16:09:46 crc kubenswrapper[4760]: I0121 16:09:46.262561 4760 generic.go:334] "Generic (PLEG): container finished" podID="37dad9ac-6a5d-42a3-8d27-950f125ba73e" containerID="64d1b04ee3e6e2ab89c4b8cd93cd15c495a27540aa681776839822fea4e64771" exitCode=0 Jan 21 16:09:46 crc kubenswrapper[4760]: I0121 16:09:46.262592 4760 generic.go:334] "Generic (PLEG): container finished" podID="37dad9ac-6a5d-42a3-8d27-950f125ba73e" containerID="be55ee2de0ebd58e732e8041653a6c569076e0b5d1115f76b9a39c89e011e777" exitCode=2 Jan 21 16:09:46 crc kubenswrapper[4760]: I0121 16:09:46.262659 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"37dad9ac-6a5d-42a3-8d27-950f125ba73e","Type":"ContainerDied","Data":"64d1b04ee3e6e2ab89c4b8cd93cd15c495a27540aa681776839822fea4e64771"} Jan 21 16:09:46 crc kubenswrapper[4760]: I0121 16:09:46.262712 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"37dad9ac-6a5d-42a3-8d27-950f125ba73e","Type":"ContainerDied","Data":"be55ee2de0ebd58e732e8041653a6c569076e0b5d1115f76b9a39c89e011e777"} Jan 21 16:09:46 crc kubenswrapper[4760]: I0121 16:09:46.262767 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="fb7bc72f-a8cd-4725-a367-37f13677715c" containerName="nova-api-log" containerID="cri-o://7a83acd79227de5dffb38c323a6ccdce2a87379ee330b378e96378fdb13b3d98" gracePeriod=30 Jan 21 16:09:46 crc kubenswrapper[4760]: I0121 16:09:46.263017 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" 
podUID="fb7bc72f-a8cd-4725-a367-37f13677715c" containerName="nova-api-api" containerID="cri-o://48b6e0901adec3653e7e77d5c9db294ccdc5f9df965bed4cfc3e05e12429c0ba" gracePeriod=30
Jan 21 16:09:46 crc kubenswrapper[4760]: I0121 16:09:46.283033 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cd5cbd7b9-jzlg2" podStartSLOduration=3.283010186 podStartE2EDuration="3.283010186s" podCreationTimestamp="2026-01-21 16:09:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:09:46.277750866 +0000 UTC m=+1356.945520454" watchObservedRunningTime="2026-01-21 16:09:46.283010186 +0000 UTC m=+1356.950779764"
Jan 21 16:09:47 crc kubenswrapper[4760]: I0121 16:09:47.274482 4760 generic.go:334] "Generic (PLEG): container finished" podID="37dad9ac-6a5d-42a3-8d27-950f125ba73e" containerID="4a4ec7c0621ab04bfc1330f49096ad6beaefcc2da4c716e22cd73a17987b5f36" exitCode=0
Jan 21 16:09:47 crc kubenswrapper[4760]: I0121 16:09:47.274508 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"37dad9ac-6a5d-42a3-8d27-950f125ba73e","Type":"ContainerDied","Data":"4a4ec7c0621ab04bfc1330f49096ad6beaefcc2da4c716e22cd73a17987b5f36"}
Jan 21 16:09:47 crc kubenswrapper[4760]: I0121 16:09:47.277135 4760 generic.go:334] "Generic (PLEG): container finished" podID="fb7bc72f-a8cd-4725-a367-37f13677715c" containerID="7a83acd79227de5dffb38c323a6ccdce2a87379ee330b378e96378fdb13b3d98" exitCode=143
Jan 21 16:09:47 crc kubenswrapper[4760]: I0121 16:09:47.277214 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fb7bc72f-a8cd-4725-a367-37f13677715c","Type":"ContainerDied","Data":"7a83acd79227de5dffb38c323a6ccdce2a87379ee330b378e96378fdb13b3d98"}
Jan 21 16:09:47 crc kubenswrapper[4760]: I0121 16:09:47.571979 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Jan 21 16:09:48 crc kubenswrapper[4760]: I0121 16:09:48.296614 4760 generic.go:334] "Generic (PLEG): container finished" podID="37dad9ac-6a5d-42a3-8d27-950f125ba73e" containerID="dad8da53bde86d9756a9332e991bdb9e5dee4591dabb79112383d1fff4d27d37" exitCode=0
Jan 21 16:09:48 crc kubenswrapper[4760]: I0121 16:09:48.297092 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"37dad9ac-6a5d-42a3-8d27-950f125ba73e","Type":"ContainerDied","Data":"dad8da53bde86d9756a9332e991bdb9e5dee4591dabb79112383d1fff4d27d37"}
Jan 21 16:09:48 crc kubenswrapper[4760]: I0121 16:09:48.469144 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 21 16:09:48 crc kubenswrapper[4760]: I0121 16:09:48.574951 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37dad9ac-6a5d-42a3-8d27-950f125ba73e-scripts\") pod \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\" (UID: \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\") "
Jan 21 16:09:48 crc kubenswrapper[4760]: I0121 16:09:48.575302 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/37dad9ac-6a5d-42a3-8d27-950f125ba73e-run-httpd\") pod \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\" (UID: \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\") "
Jan 21 16:09:48 crc kubenswrapper[4760]: I0121 16:09:48.575393 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/37dad9ac-6a5d-42a3-8d27-950f125ba73e-ceilometer-tls-certs\") pod \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\" (UID: \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\") "
Jan 21 16:09:48 crc kubenswrapper[4760]: I0121 16:09:48.575481 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/37dad9ac-6a5d-42a3-8d27-950f125ba73e-sg-core-conf-yaml\") pod \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\" (UID: \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\") "
Jan 21 16:09:48 crc kubenswrapper[4760]: I0121 16:09:48.575508 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37dad9ac-6a5d-42a3-8d27-950f125ba73e-combined-ca-bundle\") pod \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\" (UID: \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\") "
Jan 21 16:09:48 crc kubenswrapper[4760]: I0121 16:09:48.575595 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/37dad9ac-6a5d-42a3-8d27-950f125ba73e-log-httpd\") pod \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\" (UID: \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\") "
Jan 21 16:09:48 crc kubenswrapper[4760]: I0121 16:09:48.575674 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4k9hh\" (UniqueName: \"kubernetes.io/projected/37dad9ac-6a5d-42a3-8d27-950f125ba73e-kube-api-access-4k9hh\") pod \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\" (UID: \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\") "
Jan 21 16:09:48 crc kubenswrapper[4760]: I0121 16:09:48.575733 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37dad9ac-6a5d-42a3-8d27-950f125ba73e-config-data\") pod \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\" (UID: \"37dad9ac-6a5d-42a3-8d27-950f125ba73e\") "
Jan 21 16:09:48 crc kubenswrapper[4760]: I0121 16:09:48.576144 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37dad9ac-6a5d-42a3-8d27-950f125ba73e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "37dad9ac-6a5d-42a3-8d27-950f125ba73e" (UID: "37dad9ac-6a5d-42a3-8d27-950f125ba73e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:09:48 crc kubenswrapper[4760]: I0121 16:09:48.576354 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37dad9ac-6a5d-42a3-8d27-950f125ba73e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "37dad9ac-6a5d-42a3-8d27-950f125ba73e" (UID: "37dad9ac-6a5d-42a3-8d27-950f125ba73e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:09:48 crc kubenswrapper[4760]: I0121 16:09:48.577188 4760 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/37dad9ac-6a5d-42a3-8d27-950f125ba73e-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 21 16:09:48 crc kubenswrapper[4760]: I0121 16:09:48.577213 4760 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/37dad9ac-6a5d-42a3-8d27-950f125ba73e-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 21 16:09:48 crc kubenswrapper[4760]: I0121 16:09:48.581533 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37dad9ac-6a5d-42a3-8d27-950f125ba73e-kube-api-access-4k9hh" (OuterVolumeSpecName: "kube-api-access-4k9hh") pod "37dad9ac-6a5d-42a3-8d27-950f125ba73e" (UID: "37dad9ac-6a5d-42a3-8d27-950f125ba73e"). InnerVolumeSpecName "kube-api-access-4k9hh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:09:48 crc kubenswrapper[4760]: I0121 16:09:48.582141 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37dad9ac-6a5d-42a3-8d27-950f125ba73e-scripts" (OuterVolumeSpecName: "scripts") pod "37dad9ac-6a5d-42a3-8d27-950f125ba73e" (UID: "37dad9ac-6a5d-42a3-8d27-950f125ba73e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:09:48 crc kubenswrapper[4760]: I0121 16:09:48.611663 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37dad9ac-6a5d-42a3-8d27-950f125ba73e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "37dad9ac-6a5d-42a3-8d27-950f125ba73e" (UID: "37dad9ac-6a5d-42a3-8d27-950f125ba73e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:09:48 crc kubenswrapper[4760]: I0121 16:09:48.642936 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37dad9ac-6a5d-42a3-8d27-950f125ba73e-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "37dad9ac-6a5d-42a3-8d27-950f125ba73e" (UID: "37dad9ac-6a5d-42a3-8d27-950f125ba73e"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:09:48 crc kubenswrapper[4760]: I0121 16:09:48.666411 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37dad9ac-6a5d-42a3-8d27-950f125ba73e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "37dad9ac-6a5d-42a3-8d27-950f125ba73e" (UID: "37dad9ac-6a5d-42a3-8d27-950f125ba73e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:09:48 crc kubenswrapper[4760]: I0121 16:09:48.679209 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4k9hh\" (UniqueName: \"kubernetes.io/projected/37dad9ac-6a5d-42a3-8d27-950f125ba73e-kube-api-access-4k9hh\") on node \"crc\" DevicePath \"\""
Jan 21 16:09:48 crc kubenswrapper[4760]: I0121 16:09:48.679246 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37dad9ac-6a5d-42a3-8d27-950f125ba73e-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 16:09:48 crc kubenswrapper[4760]: I0121 16:09:48.679257 4760 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/37dad9ac-6a5d-42a3-8d27-950f125ba73e-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 21 16:09:48 crc kubenswrapper[4760]: I0121 16:09:48.679265 4760 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/37dad9ac-6a5d-42a3-8d27-950f125ba73e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 21 16:09:48 crc kubenswrapper[4760]: I0121 16:09:48.679273 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37dad9ac-6a5d-42a3-8d27-950f125ba73e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 16:09:48 crc kubenswrapper[4760]: I0121 16:09:48.691639 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37dad9ac-6a5d-42a3-8d27-950f125ba73e-config-data" (OuterVolumeSpecName: "config-data") pod "37dad9ac-6a5d-42a3-8d27-950f125ba73e" (UID: "37dad9ac-6a5d-42a3-8d27-950f125ba73e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:09:48 crc kubenswrapper[4760]: I0121 16:09:48.792621 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37dad9ac-6a5d-42a3-8d27-950f125ba73e-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.307517 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"37dad9ac-6a5d-42a3-8d27-950f125ba73e","Type":"ContainerDied","Data":"8be89513ab5b47099cb5f4a9b8dfe36c7195061b61e6e75962374d77612636bf"}
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.307590 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.308283 4760 scope.go:117] "RemoveContainer" containerID="64d1b04ee3e6e2ab89c4b8cd93cd15c495a27540aa681776839822fea4e64771"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.332465 4760 scope.go:117] "RemoveContainer" containerID="be55ee2de0ebd58e732e8041653a6c569076e0b5d1115f76b9a39c89e011e777"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.348121 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.359833 4760 scope.go:117] "RemoveContainer" containerID="dad8da53bde86d9756a9332e991bdb9e5dee4591dabb79112383d1fff4d27d37"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.361777 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.377697 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 21 16:09:49 crc kubenswrapper[4760]: E0121 16:09:49.378190 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37dad9ac-6a5d-42a3-8d27-950f125ba73e" containerName="ceilometer-notification-agent"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.378206 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="37dad9ac-6a5d-42a3-8d27-950f125ba73e" containerName="ceilometer-notification-agent"
Jan 21 16:09:49 crc kubenswrapper[4760]: E0121 16:09:49.378235 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37dad9ac-6a5d-42a3-8d27-950f125ba73e" containerName="proxy-httpd"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.378243 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="37dad9ac-6a5d-42a3-8d27-950f125ba73e" containerName="proxy-httpd"
Jan 21 16:09:49 crc kubenswrapper[4760]: E0121 16:09:49.378257 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37dad9ac-6a5d-42a3-8d27-950f125ba73e" containerName="ceilometer-central-agent"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.378266 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="37dad9ac-6a5d-42a3-8d27-950f125ba73e" containerName="ceilometer-central-agent"
Jan 21 16:09:49 crc kubenswrapper[4760]: E0121 16:09:49.378297 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37dad9ac-6a5d-42a3-8d27-950f125ba73e" containerName="sg-core"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.378304 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="37dad9ac-6a5d-42a3-8d27-950f125ba73e" containerName="sg-core"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.378526 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="37dad9ac-6a5d-42a3-8d27-950f125ba73e" containerName="sg-core"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.378544 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="37dad9ac-6a5d-42a3-8d27-950f125ba73e" containerName="ceilometer-central-agent"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.378560 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="37dad9ac-6a5d-42a3-8d27-950f125ba73e" containerName="proxy-httpd"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.378578 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="37dad9ac-6a5d-42a3-8d27-950f125ba73e" containerName="ceilometer-notification-agent"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.380606 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.385903 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.386093 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.386172 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.389740 4760 scope.go:117] "RemoveContainer" containerID="4a4ec7c0621ab04bfc1330f49096ad6beaefcc2da4c716e22cd73a17987b5f36"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.410223 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.505718 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3a59982-94c8-461f-99f6-8154ca0666c2-run-httpd\") pod \"ceilometer-0\" (UID: \"c3a59982-94c8-461f-99f6-8154ca0666c2\") " pod="openstack/ceilometer-0"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.506498 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f97b\" (UniqueName: \"kubernetes.io/projected/c3a59982-94c8-461f-99f6-8154ca0666c2-kube-api-access-6f97b\") pod \"ceilometer-0\" (UID: \"c3a59982-94c8-461f-99f6-8154ca0666c2\") " pod="openstack/ceilometer-0"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.507004 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3a59982-94c8-461f-99f6-8154ca0666c2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c3a59982-94c8-461f-99f6-8154ca0666c2\") " pod="openstack/ceilometer-0"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.507095 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3a59982-94c8-461f-99f6-8154ca0666c2-log-httpd\") pod \"ceilometer-0\" (UID: \"c3a59982-94c8-461f-99f6-8154ca0666c2\") " pod="openstack/ceilometer-0"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.507138 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c3a59982-94c8-461f-99f6-8154ca0666c2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c3a59982-94c8-461f-99f6-8154ca0666c2\") " pod="openstack/ceilometer-0"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.507174 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3a59982-94c8-461f-99f6-8154ca0666c2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c3a59982-94c8-461f-99f6-8154ca0666c2\") " pod="openstack/ceilometer-0"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.507198 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3a59982-94c8-461f-99f6-8154ca0666c2-scripts\") pod \"ceilometer-0\" (UID: \"c3a59982-94c8-461f-99f6-8154ca0666c2\") " pod="openstack/ceilometer-0"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.507243 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3a59982-94c8-461f-99f6-8154ca0666c2-config-data\") pod \"ceilometer-0\" (UID: \"c3a59982-94c8-461f-99f6-8154ca0666c2\") " pod="openstack/ceilometer-0"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.608598 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3a59982-94c8-461f-99f6-8154ca0666c2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c3a59982-94c8-461f-99f6-8154ca0666c2\") " pod="openstack/ceilometer-0"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.608695 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3a59982-94c8-461f-99f6-8154ca0666c2-log-httpd\") pod \"ceilometer-0\" (UID: \"c3a59982-94c8-461f-99f6-8154ca0666c2\") " pod="openstack/ceilometer-0"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.608732 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c3a59982-94c8-461f-99f6-8154ca0666c2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c3a59982-94c8-461f-99f6-8154ca0666c2\") " pod="openstack/ceilometer-0"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.608768 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3a59982-94c8-461f-99f6-8154ca0666c2-scripts\") pod \"ceilometer-0\" (UID: \"c3a59982-94c8-461f-99f6-8154ca0666c2\") " pod="openstack/ceilometer-0"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.608791 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3a59982-94c8-461f-99f6-8154ca0666c2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c3a59982-94c8-461f-99f6-8154ca0666c2\") " pod="openstack/ceilometer-0"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.608832 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3a59982-94c8-461f-99f6-8154ca0666c2-config-data\") pod \"ceilometer-0\" (UID: \"c3a59982-94c8-461f-99f6-8154ca0666c2\") " pod="openstack/ceilometer-0"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.608885 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3a59982-94c8-461f-99f6-8154ca0666c2-run-httpd\") pod \"ceilometer-0\" (UID: \"c3a59982-94c8-461f-99f6-8154ca0666c2\") " pod="openstack/ceilometer-0"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.608910 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6f97b\" (UniqueName: \"kubernetes.io/projected/c3a59982-94c8-461f-99f6-8154ca0666c2-kube-api-access-6f97b\") pod \"ceilometer-0\" (UID: \"c3a59982-94c8-461f-99f6-8154ca0666c2\") " pod="openstack/ceilometer-0"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.609546 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3a59982-94c8-461f-99f6-8154ca0666c2-log-httpd\") pod \"ceilometer-0\" (UID: \"c3a59982-94c8-461f-99f6-8154ca0666c2\") " pod="openstack/ceilometer-0"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.609863 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c3a59982-94c8-461f-99f6-8154ca0666c2-run-httpd\") pod \"ceilometer-0\" (UID: \"c3a59982-94c8-461f-99f6-8154ca0666c2\") " pod="openstack/ceilometer-0"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.616204 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c3a59982-94c8-461f-99f6-8154ca0666c2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c3a59982-94c8-461f-99f6-8154ca0666c2\") " pod="openstack/ceilometer-0"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.617299 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3a59982-94c8-461f-99f6-8154ca0666c2-scripts\") pod \"ceilometer-0\" (UID: \"c3a59982-94c8-461f-99f6-8154ca0666c2\") " pod="openstack/ceilometer-0"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.617880 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3a59982-94c8-461f-99f6-8154ca0666c2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c3a59982-94c8-461f-99f6-8154ca0666c2\") " pod="openstack/ceilometer-0"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.617935 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3a59982-94c8-461f-99f6-8154ca0666c2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c3a59982-94c8-461f-99f6-8154ca0666c2\") " pod="openstack/ceilometer-0"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.626392 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6f97b\" (UniqueName: \"kubernetes.io/projected/c3a59982-94c8-461f-99f6-8154ca0666c2-kube-api-access-6f97b\") pod \"ceilometer-0\" (UID: \"c3a59982-94c8-461f-99f6-8154ca0666c2\") " pod="openstack/ceilometer-0"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.631181 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3a59982-94c8-461f-99f6-8154ca0666c2-config-data\") pod \"ceilometer-0\" (UID: \"c3a59982-94c8-461f-99f6-8154ca0666c2\") " pod="openstack/ceilometer-0"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.636077 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37dad9ac-6a5d-42a3-8d27-950f125ba73e" path="/var/lib/kubelet/pods/37dad9ac-6a5d-42a3-8d27-950f125ba73e/volumes"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.706480 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 21 16:09:49 crc kubenswrapper[4760]: I0121 16:09:49.875900 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.018703 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dblh8\" (UniqueName: \"kubernetes.io/projected/fb7bc72f-a8cd-4725-a367-37f13677715c-kube-api-access-dblh8\") pod \"fb7bc72f-a8cd-4725-a367-37f13677715c\" (UID: \"fb7bc72f-a8cd-4725-a367-37f13677715c\") "
Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.018780 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb7bc72f-a8cd-4725-a367-37f13677715c-config-data\") pod \"fb7bc72f-a8cd-4725-a367-37f13677715c\" (UID: \"fb7bc72f-a8cd-4725-a367-37f13677715c\") "
Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.018935 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb7bc72f-a8cd-4725-a367-37f13677715c-combined-ca-bundle\") pod \"fb7bc72f-a8cd-4725-a367-37f13677715c\" (UID: \"fb7bc72f-a8cd-4725-a367-37f13677715c\") "
Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.019012 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb7bc72f-a8cd-4725-a367-37f13677715c-logs\") pod \"fb7bc72f-a8cd-4725-a367-37f13677715c\" (UID: \"fb7bc72f-a8cd-4725-a367-37f13677715c\") "
Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.019517 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb7bc72f-a8cd-4725-a367-37f13677715c-logs" (OuterVolumeSpecName: "logs") pod "fb7bc72f-a8cd-4725-a367-37f13677715c" (UID: "fb7bc72f-a8cd-4725-a367-37f13677715c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.020289 4760 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb7bc72f-a8cd-4725-a367-37f13677715c-logs\") on node \"crc\" DevicePath \"\""
Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.023538 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb7bc72f-a8cd-4725-a367-37f13677715c-kube-api-access-dblh8" (OuterVolumeSpecName: "kube-api-access-dblh8") pod "fb7bc72f-a8cd-4725-a367-37f13677715c" (UID: "fb7bc72f-a8cd-4725-a367-37f13677715c"). InnerVolumeSpecName "kube-api-access-dblh8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.073773 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb7bc72f-a8cd-4725-a367-37f13677715c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fb7bc72f-a8cd-4725-a367-37f13677715c" (UID: "fb7bc72f-a8cd-4725-a367-37f13677715c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.075419 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb7bc72f-a8cd-4725-a367-37f13677715c-config-data" (OuterVolumeSpecName: "config-data") pod "fb7bc72f-a8cd-4725-a367-37f13677715c" (UID: "fb7bc72f-a8cd-4725-a367-37f13677715c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.122529 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb7bc72f-a8cd-4725-a367-37f13677715c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.122577 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dblh8\" (UniqueName: \"kubernetes.io/projected/fb7bc72f-a8cd-4725-a367-37f13677715c-kube-api-access-dblh8\") on node \"crc\" DevicePath \"\""
Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.122591 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb7bc72f-a8cd-4725-a367-37f13677715c-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 16:09:50 crc kubenswrapper[4760]: W0121 16:09:50.268856 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3a59982_94c8_461f_99f6_8154ca0666c2.slice/crio-8d81601c7bc4e166681a5571b4edae699f428afd35a1292101bc1fb0dec7c278 WatchSource:0}: Error finding container 8d81601c7bc4e166681a5571b4edae699f428afd35a1292101bc1fb0dec7c278: Status 404 returned error can't find the container with id 8d81601c7bc4e166681a5571b4edae699f428afd35a1292101bc1fb0dec7c278
Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.271796 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.322767 4760 generic.go:334] "Generic (PLEG): container finished" podID="fb7bc72f-a8cd-4725-a367-37f13677715c" containerID="48b6e0901adec3653e7e77d5c9db294ccdc5f9df965bed4cfc3e05e12429c0ba" exitCode=0
Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.322845 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.322861 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fb7bc72f-a8cd-4725-a367-37f13677715c","Type":"ContainerDied","Data":"48b6e0901adec3653e7e77d5c9db294ccdc5f9df965bed4cfc3e05e12429c0ba"}
Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.323265 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fb7bc72f-a8cd-4725-a367-37f13677715c","Type":"ContainerDied","Data":"ffaef25eceb211fc4d500273319b2ba7aa9cc192364bbecc89f8b49d12e6dd66"}
Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.323295 4760 scope.go:117] "RemoveContainer" containerID="48b6e0901adec3653e7e77d5c9db294ccdc5f9df965bed4cfc3e05e12429c0ba"
Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.324569 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c3a59982-94c8-461f-99f6-8154ca0666c2","Type":"ContainerStarted","Data":"8d81601c7bc4e166681a5571b4edae699f428afd35a1292101bc1fb0dec7c278"}
Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.345632 4760 scope.go:117] "RemoveContainer" containerID="7a83acd79227de5dffb38c323a6ccdce2a87379ee330b378e96378fdb13b3d98"
Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.369509 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.380685 4760 scope.go:117] "RemoveContainer" containerID="48b6e0901adec3653e7e77d5c9db294ccdc5f9df965bed4cfc3e05e12429c0ba"
Jan 21 16:09:50 crc kubenswrapper[4760]: E0121 16:09:50.381196 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48b6e0901adec3653e7e77d5c9db294ccdc5f9df965bed4cfc3e05e12429c0ba\": container with ID starting with 48b6e0901adec3653e7e77d5c9db294ccdc5f9df965bed4cfc3e05e12429c0ba not found: ID does not exist" containerID="48b6e0901adec3653e7e77d5c9db294ccdc5f9df965bed4cfc3e05e12429c0ba"
Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.381237 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48b6e0901adec3653e7e77d5c9db294ccdc5f9df965bed4cfc3e05e12429c0ba"} err="failed to get container status \"48b6e0901adec3653e7e77d5c9db294ccdc5f9df965bed4cfc3e05e12429c0ba\": rpc error: code = NotFound desc = could not find container \"48b6e0901adec3653e7e77d5c9db294ccdc5f9df965bed4cfc3e05e12429c0ba\": container with ID starting with 48b6e0901adec3653e7e77d5c9db294ccdc5f9df965bed4cfc3e05e12429c0ba not found: ID does not exist"
Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.381269 4760 scope.go:117] "RemoveContainer" containerID="7a83acd79227de5dffb38c323a6ccdce2a87379ee330b378e96378fdb13b3d98"
Jan 21 16:09:50 crc kubenswrapper[4760]: E0121 16:09:50.381655 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a83acd79227de5dffb38c323a6ccdce2a87379ee330b378e96378fdb13b3d98\": container with ID starting with 7a83acd79227de5dffb38c323a6ccdce2a87379ee330b378e96378fdb13b3d98 not found: ID does not exist" containerID="7a83acd79227de5dffb38c323a6ccdce2a87379ee330b378e96378fdb13b3d98"
Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.381698 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a83acd79227de5dffb38c323a6ccdce2a87379ee330b378e96378fdb13b3d98"} err="failed to get container status \"7a83acd79227de5dffb38c323a6ccdce2a87379ee330b378e96378fdb13b3d98\": rpc error: code = NotFound desc = could not find container \"7a83acd79227de5dffb38c323a6ccdce2a87379ee330b378e96378fdb13b3d98\": container with ID starting with 7a83acd79227de5dffb38c323a6ccdce2a87379ee330b378e96378fdb13b3d98 not found: ID does not exist"
Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.382570 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.395134 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 21 16:09:50 crc kubenswrapper[4760]: E0121 16:09:50.395625 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb7bc72f-a8cd-4725-a367-37f13677715c" containerName="nova-api-api"
Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.395643 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb7bc72f-a8cd-4725-a367-37f13677715c" containerName="nova-api-api"
Jan 21 16:09:50 crc kubenswrapper[4760]: E0121 16:09:50.395655 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb7bc72f-a8cd-4725-a367-37f13677715c" containerName="nova-api-log"
Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.395660 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb7bc72f-a8cd-4725-a367-37f13677715c" containerName="nova-api-log"
Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.395834 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb7bc72f-a8cd-4725-a367-37f13677715c" containerName="nova-api-log"
Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.395846 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb7bc72f-a8cd-4725-a367-37f13677715c" containerName="nova-api-api"
Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.396853 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.399600 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.400581 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.400685 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.404022 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.530446 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/12b3ce13-1f05-40e4-a800-1436993b565e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"12b3ce13-1f05-40e4-a800-1436993b565e\") " pod="openstack/nova-api-0"
Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.530801 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75hmb\" (UniqueName: \"kubernetes.io/projected/12b3ce13-1f05-40e4-a800-1436993b565e-kube-api-access-75hmb\") pod \"nova-api-0\" (UID: \"12b3ce13-1f05-40e4-a800-1436993b565e\") " pod="openstack/nova-api-0"
Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.530904 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/12b3ce13-1f05-40e4-a800-1436993b565e-public-tls-certs\") pod \"nova-api-0\" (UID: \"12b3ce13-1f05-40e4-a800-1436993b565e\") " pod="openstack/nova-api-0"
Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.531002 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12b3ce13-1f05-40e4-a800-1436993b565e-logs\") pod \"nova-api-0\" (UID: \"12b3ce13-1f05-40e4-a800-1436993b565e\") " pod="openstack/nova-api-0"
Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.531116 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12b3ce13-1f05-40e4-a800-1436993b565e-config-data\") pod \"nova-api-0\" (UID: \"12b3ce13-1f05-40e4-a800-1436993b565e\") " pod="openstack/nova-api-0"
Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.531194 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12b3ce13-1f05-40e4-a800-1436993b565e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"12b3ce13-1f05-40e4-a800-1436993b565e\") " pod="openstack/nova-api-0"
Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.632990 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75hmb\" (UniqueName: \"kubernetes.io/projected/12b3ce13-1f05-40e4-a800-1436993b565e-kube-api-access-75hmb\") pod \"nova-api-0\" (UID: \"12b3ce13-1f05-40e4-a800-1436993b565e\") " pod="openstack/nova-api-0"
Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.633059 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/12b3ce13-1f05-40e4-a800-1436993b565e-public-tls-certs\") pod \"nova-api-0\" (UID: \"12b3ce13-1f05-40e4-a800-1436993b565e\") " pod="openstack/nova-api-0"
Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.633089 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12b3ce13-1f05-40e4-a800-1436993b565e-logs\") pod \"nova-api-0\" (UID: \"12b3ce13-1f05-40e4-a800-1436993b565e\") " pod="openstack/nova-api-0"
Jan 21 16:09:50
crc kubenswrapper[4760]: I0121 16:09:50.633128 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12b3ce13-1f05-40e4-a800-1436993b565e-config-data\") pod \"nova-api-0\" (UID: \"12b3ce13-1f05-40e4-a800-1436993b565e\") " pod="openstack/nova-api-0" Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.633151 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12b3ce13-1f05-40e4-a800-1436993b565e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"12b3ce13-1f05-40e4-a800-1436993b565e\") " pod="openstack/nova-api-0" Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.633239 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/12b3ce13-1f05-40e4-a800-1436993b565e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"12b3ce13-1f05-40e4-a800-1436993b565e\") " pod="openstack/nova-api-0" Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.633645 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12b3ce13-1f05-40e4-a800-1436993b565e-logs\") pod \"nova-api-0\" (UID: \"12b3ce13-1f05-40e4-a800-1436993b565e\") " pod="openstack/nova-api-0" Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.641997 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/12b3ce13-1f05-40e4-a800-1436993b565e-public-tls-certs\") pod \"nova-api-0\" (UID: \"12b3ce13-1f05-40e4-a800-1436993b565e\") " pod="openstack/nova-api-0" Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.642517 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/12b3ce13-1f05-40e4-a800-1436993b565e-internal-tls-certs\") pod \"nova-api-0\" (UID: 
\"12b3ce13-1f05-40e4-a800-1436993b565e\") " pod="openstack/nova-api-0" Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.642798 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12b3ce13-1f05-40e4-a800-1436993b565e-config-data\") pod \"nova-api-0\" (UID: \"12b3ce13-1f05-40e4-a800-1436993b565e\") " pod="openstack/nova-api-0" Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.643039 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12b3ce13-1f05-40e4-a800-1436993b565e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"12b3ce13-1f05-40e4-a800-1436993b565e\") " pod="openstack/nova-api-0" Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.651307 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75hmb\" (UniqueName: \"kubernetes.io/projected/12b3ce13-1f05-40e4-a800-1436993b565e-kube-api-access-75hmb\") pod \"nova-api-0\" (UID: \"12b3ce13-1f05-40e4-a800-1436993b565e\") " pod="openstack/nova-api-0" Jan 21 16:09:50 crc kubenswrapper[4760]: I0121 16:09:50.714130 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 16:09:51 crc kubenswrapper[4760]: I0121 16:09:51.247173 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:09:51 crc kubenswrapper[4760]: W0121 16:09:51.253334 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod12b3ce13_1f05_40e4_a800_1436993b565e.slice/crio-3bf0ad87f28bc3267629d0ca8c5141ae8bbbf1e3002cba6d4242dfc7d2cf2728 WatchSource:0}: Error finding container 3bf0ad87f28bc3267629d0ca8c5141ae8bbbf1e3002cba6d4242dfc7d2cf2728: Status 404 returned error can't find the container with id 3bf0ad87f28bc3267629d0ca8c5141ae8bbbf1e3002cba6d4242dfc7d2cf2728 Jan 21 16:09:51 crc kubenswrapper[4760]: I0121 16:09:51.337138 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"12b3ce13-1f05-40e4-a800-1436993b565e","Type":"ContainerStarted","Data":"3bf0ad87f28bc3267629d0ca8c5141ae8bbbf1e3002cba6d4242dfc7d2cf2728"} Jan 21 16:09:51 crc kubenswrapper[4760]: I0121 16:09:51.339803 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c3a59982-94c8-461f-99f6-8154ca0666c2","Type":"ContainerStarted","Data":"8bc55c22842cbd29aa8be8d2b61668194ef29cec9e03d7deccbe6b3a2d43d8ba"} Jan 21 16:09:51 crc kubenswrapper[4760]: I0121 16:09:51.638175 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb7bc72f-a8cd-4725-a367-37f13677715c" path="/var/lib/kubelet/pods/fb7bc72f-a8cd-4725-a367-37f13677715c/volumes" Jan 21 16:09:52 crc kubenswrapper[4760]: I0121 16:09:52.351425 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c3a59982-94c8-461f-99f6-8154ca0666c2","Type":"ContainerStarted","Data":"2f7a06618f9b2c2663fd43c27128b1bf23fb06602d38e26efa5080527f934359"} Jan 21 16:09:52 crc kubenswrapper[4760]: I0121 16:09:52.355097 4760 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/nova-api-0" event={"ID":"12b3ce13-1f05-40e4-a800-1436993b565e","Type":"ContainerStarted","Data":"e054add364221c34419aa9521f1d811056bcefe3992b6d4c10f32973e0e71b8f"} Jan 21 16:09:52 crc kubenswrapper[4760]: I0121 16:09:52.355134 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"12b3ce13-1f05-40e4-a800-1436993b565e","Type":"ContainerStarted","Data":"e6c49eb99204207dacfadb23a28f978e095ae29ae3d4782c5bf33a056183b430"} Jan 21 16:09:52 crc kubenswrapper[4760]: I0121 16:09:52.571763 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:09:52 crc kubenswrapper[4760]: I0121 16:09:52.591429 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:09:52 crc kubenswrapper[4760]: I0121 16:09:52.613404 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.613379262 podStartE2EDuration="2.613379262s" podCreationTimestamp="2026-01-21 16:09:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:09:52.380418801 +0000 UTC m=+1363.048188399" watchObservedRunningTime="2026-01-21 16:09:52.613379262 +0000 UTC m=+1363.281148850" Jan 21 16:09:53 crc kubenswrapper[4760]: I0121 16:09:53.366624 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c3a59982-94c8-461f-99f6-8154ca0666c2","Type":"ContainerStarted","Data":"5ef83647b056c90f49c759281ef9452526a11bae3702b4c0c4312b87baa83b14"} Jan 21 16:09:53 crc kubenswrapper[4760]: I0121 16:09:53.383141 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 21 16:09:53 crc kubenswrapper[4760]: I0121 16:09:53.560201 4760 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-cell1-cell-mapping-vdmds"] Jan 21 16:09:53 crc kubenswrapper[4760]: I0121 16:09:53.561657 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-vdmds" Jan 21 16:09:53 crc kubenswrapper[4760]: I0121 16:09:53.569061 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 21 16:09:53 crc kubenswrapper[4760]: I0121 16:09:53.570263 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 21 16:09:53 crc kubenswrapper[4760]: I0121 16:09:53.582705 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-vdmds"] Jan 21 16:09:53 crc kubenswrapper[4760]: I0121 16:09:53.605839 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bcb4a273-5a24-4d7b-b071-53db16ef9f47-scripts\") pod \"nova-cell1-cell-mapping-vdmds\" (UID: \"bcb4a273-5a24-4d7b-b071-53db16ef9f47\") " pod="openstack/nova-cell1-cell-mapping-vdmds" Jan 21 16:09:53 crc kubenswrapper[4760]: I0121 16:09:53.607635 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drvr2\" (UniqueName: \"kubernetes.io/projected/bcb4a273-5a24-4d7b-b071-53db16ef9f47-kube-api-access-drvr2\") pod \"nova-cell1-cell-mapping-vdmds\" (UID: \"bcb4a273-5a24-4d7b-b071-53db16ef9f47\") " pod="openstack/nova-cell1-cell-mapping-vdmds" Jan 21 16:09:53 crc kubenswrapper[4760]: I0121 16:09:53.607667 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcb4a273-5a24-4d7b-b071-53db16ef9f47-config-data\") pod \"nova-cell1-cell-mapping-vdmds\" (UID: \"bcb4a273-5a24-4d7b-b071-53db16ef9f47\") " pod="openstack/nova-cell1-cell-mapping-vdmds" Jan 21 16:09:53 crc 
kubenswrapper[4760]: I0121 16:09:53.607704 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcb4a273-5a24-4d7b-b071-53db16ef9f47-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-vdmds\" (UID: \"bcb4a273-5a24-4d7b-b071-53db16ef9f47\") " pod="openstack/nova-cell1-cell-mapping-vdmds" Jan 21 16:09:53 crc kubenswrapper[4760]: I0121 16:09:53.708568 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drvr2\" (UniqueName: \"kubernetes.io/projected/bcb4a273-5a24-4d7b-b071-53db16ef9f47-kube-api-access-drvr2\") pod \"nova-cell1-cell-mapping-vdmds\" (UID: \"bcb4a273-5a24-4d7b-b071-53db16ef9f47\") " pod="openstack/nova-cell1-cell-mapping-vdmds" Jan 21 16:09:53 crc kubenswrapper[4760]: I0121 16:09:53.708622 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcb4a273-5a24-4d7b-b071-53db16ef9f47-config-data\") pod \"nova-cell1-cell-mapping-vdmds\" (UID: \"bcb4a273-5a24-4d7b-b071-53db16ef9f47\") " pod="openstack/nova-cell1-cell-mapping-vdmds" Jan 21 16:09:53 crc kubenswrapper[4760]: I0121 16:09:53.708687 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcb4a273-5a24-4d7b-b071-53db16ef9f47-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-vdmds\" (UID: \"bcb4a273-5a24-4d7b-b071-53db16ef9f47\") " pod="openstack/nova-cell1-cell-mapping-vdmds" Jan 21 16:09:53 crc kubenswrapper[4760]: I0121 16:09:53.708917 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bcb4a273-5a24-4d7b-b071-53db16ef9f47-scripts\") pod \"nova-cell1-cell-mapping-vdmds\" (UID: \"bcb4a273-5a24-4d7b-b071-53db16ef9f47\") " pod="openstack/nova-cell1-cell-mapping-vdmds" Jan 21 16:09:53 crc kubenswrapper[4760]: I0121 
16:09:53.716009 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcb4a273-5a24-4d7b-b071-53db16ef9f47-config-data\") pod \"nova-cell1-cell-mapping-vdmds\" (UID: \"bcb4a273-5a24-4d7b-b071-53db16ef9f47\") " pod="openstack/nova-cell1-cell-mapping-vdmds" Jan 21 16:09:53 crc kubenswrapper[4760]: I0121 16:09:53.717239 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bcb4a273-5a24-4d7b-b071-53db16ef9f47-scripts\") pod \"nova-cell1-cell-mapping-vdmds\" (UID: \"bcb4a273-5a24-4d7b-b071-53db16ef9f47\") " pod="openstack/nova-cell1-cell-mapping-vdmds" Jan 21 16:09:53 crc kubenswrapper[4760]: I0121 16:09:53.717955 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcb4a273-5a24-4d7b-b071-53db16ef9f47-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-vdmds\" (UID: \"bcb4a273-5a24-4d7b-b071-53db16ef9f47\") " pod="openstack/nova-cell1-cell-mapping-vdmds" Jan 21 16:09:53 crc kubenswrapper[4760]: I0121 16:09:53.728730 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drvr2\" (UniqueName: \"kubernetes.io/projected/bcb4a273-5a24-4d7b-b071-53db16ef9f47-kube-api-access-drvr2\") pod \"nova-cell1-cell-mapping-vdmds\" (UID: \"bcb4a273-5a24-4d7b-b071-53db16ef9f47\") " pod="openstack/nova-cell1-cell-mapping-vdmds" Jan 21 16:09:53 crc kubenswrapper[4760]: I0121 16:09:53.770627 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cd5cbd7b9-jzlg2" Jan 21 16:09:53 crc kubenswrapper[4760]: I0121 16:09:53.852028 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-pvm44"] Jan 21 16:09:53 crc kubenswrapper[4760]: I0121 16:09:53.852290 4760 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/dnsmasq-dns-bccf8f775-pvm44" podUID="1d894810-0b12-4078-9edb-9b78d95cd5f4" containerName="dnsmasq-dns" containerID="cri-o://1db349c07ca2ca94ed200f75476da5a49f2c030ff9295413b8832a8ac97b6894" gracePeriod=10 Jan 21 16:09:53 crc kubenswrapper[4760]: I0121 16:09:53.886159 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-vdmds" Jan 21 16:09:54 crc kubenswrapper[4760]: E0121 16:09:54.048767 4760 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d894810_0b12_4078_9edb_9b78d95cd5f4.slice/crio-1db349c07ca2ca94ed200f75476da5a49f2c030ff9295413b8832a8ac97b6894.scope\": RecentStats: unable to find data in memory cache]" Jan 21 16:09:54 crc kubenswrapper[4760]: I0121 16:09:54.377215 4760 generic.go:334] "Generic (PLEG): container finished" podID="1d894810-0b12-4078-9edb-9b78d95cd5f4" containerID="1db349c07ca2ca94ed200f75476da5a49f2c030ff9295413b8832a8ac97b6894" exitCode=0 Jan 21 16:09:54 crc kubenswrapper[4760]: I0121 16:09:54.377664 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-pvm44" event={"ID":"1d894810-0b12-4078-9edb-9b78d95cd5f4","Type":"ContainerDied","Data":"1db349c07ca2ca94ed200f75476da5a49f2c030ff9295413b8832a8ac97b6894"} Jan 21 16:09:54 crc kubenswrapper[4760]: I0121 16:09:54.442013 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-vdmds"] Jan 21 16:09:54 crc kubenswrapper[4760]: W0121 16:09:54.446756 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbcb4a273_5a24_4d7b_b071_53db16ef9f47.slice/crio-4cb4398c279d0aaaf3ee3d1e2210c49647b96d17cf4babb99ff46d61260d3358 WatchSource:0}: Error finding container 4cb4398c279d0aaaf3ee3d1e2210c49647b96d17cf4babb99ff46d61260d3358: Status 404 returned error 
can't find the container with id 4cb4398c279d0aaaf3ee3d1e2210c49647b96d17cf4babb99ff46d61260d3358 Jan 21 16:09:54 crc kubenswrapper[4760]: I0121 16:09:54.909134 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-pvm44" Jan 21 16:09:54 crc kubenswrapper[4760]: I0121 16:09:54.950292 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d894810-0b12-4078-9edb-9b78d95cd5f4-config\") pod \"1d894810-0b12-4078-9edb-9b78d95cd5f4\" (UID: \"1d894810-0b12-4078-9edb-9b78d95cd5f4\") " Jan 21 16:09:54 crc kubenswrapper[4760]: I0121 16:09:54.950412 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d894810-0b12-4078-9edb-9b78d95cd5f4-dns-swift-storage-0\") pod \"1d894810-0b12-4078-9edb-9b78d95cd5f4\" (UID: \"1d894810-0b12-4078-9edb-9b78d95cd5f4\") " Jan 21 16:09:54 crc kubenswrapper[4760]: I0121 16:09:54.950472 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d894810-0b12-4078-9edb-9b78d95cd5f4-ovsdbserver-sb\") pod \"1d894810-0b12-4078-9edb-9b78d95cd5f4\" (UID: \"1d894810-0b12-4078-9edb-9b78d95cd5f4\") " Jan 21 16:09:54 crc kubenswrapper[4760]: I0121 16:09:54.950505 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2plw\" (UniqueName: \"kubernetes.io/projected/1d894810-0b12-4078-9edb-9b78d95cd5f4-kube-api-access-z2plw\") pod \"1d894810-0b12-4078-9edb-9b78d95cd5f4\" (UID: \"1d894810-0b12-4078-9edb-9b78d95cd5f4\") " Jan 21 16:09:54 crc kubenswrapper[4760]: I0121 16:09:54.950543 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d894810-0b12-4078-9edb-9b78d95cd5f4-ovsdbserver-nb\") pod 
\"1d894810-0b12-4078-9edb-9b78d95cd5f4\" (UID: \"1d894810-0b12-4078-9edb-9b78d95cd5f4\") " Jan 21 16:09:54 crc kubenswrapper[4760]: I0121 16:09:54.950569 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d894810-0b12-4078-9edb-9b78d95cd5f4-dns-svc\") pod \"1d894810-0b12-4078-9edb-9b78d95cd5f4\" (UID: \"1d894810-0b12-4078-9edb-9b78d95cd5f4\") " Jan 21 16:09:54 crc kubenswrapper[4760]: I0121 16:09:54.962642 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d894810-0b12-4078-9edb-9b78d95cd5f4-kube-api-access-z2plw" (OuterVolumeSpecName: "kube-api-access-z2plw") pod "1d894810-0b12-4078-9edb-9b78d95cd5f4" (UID: "1d894810-0b12-4078-9edb-9b78d95cd5f4"). InnerVolumeSpecName "kube-api-access-z2plw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:09:55 crc kubenswrapper[4760]: I0121 16:09:55.050742 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d894810-0b12-4078-9edb-9b78d95cd5f4-config" (OuterVolumeSpecName: "config") pod "1d894810-0b12-4078-9edb-9b78d95cd5f4" (UID: "1d894810-0b12-4078-9edb-9b78d95cd5f4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:09:55 crc kubenswrapper[4760]: I0121 16:09:55.056380 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2plw\" (UniqueName: \"kubernetes.io/projected/1d894810-0b12-4078-9edb-9b78d95cd5f4-kube-api-access-z2plw\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:55 crc kubenswrapper[4760]: I0121 16:09:55.061690 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d894810-0b12-4078-9edb-9b78d95cd5f4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1d894810-0b12-4078-9edb-9b78d95cd5f4" (UID: "1d894810-0b12-4078-9edb-9b78d95cd5f4"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:09:55 crc kubenswrapper[4760]: I0121 16:09:55.108209 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d894810-0b12-4078-9edb-9b78d95cd5f4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1d894810-0b12-4078-9edb-9b78d95cd5f4" (UID: "1d894810-0b12-4078-9edb-9b78d95cd5f4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:09:55 crc kubenswrapper[4760]: I0121 16:09:55.110857 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d894810-0b12-4078-9edb-9b78d95cd5f4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1d894810-0b12-4078-9edb-9b78d95cd5f4" (UID: "1d894810-0b12-4078-9edb-9b78d95cd5f4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:09:55 crc kubenswrapper[4760]: I0121 16:09:55.158373 4760 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1d894810-0b12-4078-9edb-9b78d95cd5f4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:55 crc kubenswrapper[4760]: I0121 16:09:55.158405 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1d894810-0b12-4078-9edb-9b78d95cd5f4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:55 crc kubenswrapper[4760]: I0121 16:09:55.158416 4760 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d894810-0b12-4078-9edb-9b78d95cd5f4-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:55 crc kubenswrapper[4760]: I0121 16:09:55.158425 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d894810-0b12-4078-9edb-9b78d95cd5f4-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:55 crc 
kubenswrapper[4760]: I0121 16:09:55.165316 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d894810-0b12-4078-9edb-9b78d95cd5f4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1d894810-0b12-4078-9edb-9b78d95cd5f4" (UID: "1d894810-0b12-4078-9edb-9b78d95cd5f4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:09:55 crc kubenswrapper[4760]: I0121 16:09:55.260519 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d894810-0b12-4078-9edb-9b78d95cd5f4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 16:09:55 crc kubenswrapper[4760]: I0121 16:09:55.387304 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-vdmds" event={"ID":"bcb4a273-5a24-4d7b-b071-53db16ef9f47","Type":"ContainerStarted","Data":"f84fee895d5b623eba76a6d52894ce3241208b6a45938ac93ce1de5aef19d4f7"} Jan 21 16:09:55 crc kubenswrapper[4760]: I0121 16:09:55.387369 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-vdmds" event={"ID":"bcb4a273-5a24-4d7b-b071-53db16ef9f47","Type":"ContainerStarted","Data":"4cb4398c279d0aaaf3ee3d1e2210c49647b96d17cf4babb99ff46d61260d3358"} Jan 21 16:09:55 crc kubenswrapper[4760]: I0121 16:09:55.391911 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-pvm44" event={"ID":"1d894810-0b12-4078-9edb-9b78d95cd5f4","Type":"ContainerDied","Data":"4668563eeac781e527ddeca3a577821c300ada3336e66292dfe89a3ded8156d3"} Jan 21 16:09:55 crc kubenswrapper[4760]: I0121 16:09:55.391953 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-pvm44" Jan 21 16:09:55 crc kubenswrapper[4760]: I0121 16:09:55.391957 4760 scope.go:117] "RemoveContainer" containerID="1db349c07ca2ca94ed200f75476da5a49f2c030ff9295413b8832a8ac97b6894" Jan 21 16:09:55 crc kubenswrapper[4760]: I0121 16:09:55.406849 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c3a59982-94c8-461f-99f6-8154ca0666c2","Type":"ContainerStarted","Data":"d4a4a95c23e6abf81aa68690116044e4c3f8ef4c5da4c0e96f5571a3eecbdc0b"} Jan 21 16:09:55 crc kubenswrapper[4760]: I0121 16:09:55.408759 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 16:09:55 crc kubenswrapper[4760]: I0121 16:09:55.421155 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-vdmds" podStartSLOduration=2.4211309180000002 podStartE2EDuration="2.421130918s" podCreationTimestamp="2026-01-21 16:09:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:09:55.410903222 +0000 UTC m=+1366.078672810" watchObservedRunningTime="2026-01-21 16:09:55.421130918 +0000 UTC m=+1366.088900496" Jan 21 16:09:55 crc kubenswrapper[4760]: I0121 16:09:55.439103 4760 scope.go:117] "RemoveContainer" containerID="75eac5402095150f6e05a324475d5637a69a1ae97bf4ba4251a0a215dc883cec" Jan 21 16:09:55 crc kubenswrapper[4760]: I0121 16:09:55.452704 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.148477371 podStartE2EDuration="6.45267539s" podCreationTimestamp="2026-01-21 16:09:49 +0000 UTC" firstStartedPulling="2026-01-21 16:09:50.271241351 +0000 UTC m=+1360.939010929" lastFinishedPulling="2026-01-21 16:09:54.57543937 +0000 UTC m=+1365.243208948" observedRunningTime="2026-01-21 16:09:55.444387685 +0000 UTC m=+1366.112157273" 
watchObservedRunningTime="2026-01-21 16:09:55.45267539 +0000 UTC m=+1366.120444978"
Jan 21 16:09:55 crc kubenswrapper[4760]: I0121 16:09:55.489662 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-pvm44"]
Jan 21 16:09:55 crc kubenswrapper[4760]: I0121 16:09:55.500780 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-pvm44"]
Jan 21 16:09:55 crc kubenswrapper[4760]: I0121 16:09:55.634644 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d894810-0b12-4078-9edb-9b78d95cd5f4" path="/var/lib/kubelet/pods/1d894810-0b12-4078-9edb-9b78d95cd5f4/volumes"
Jan 21 16:10:00 crc kubenswrapper[4760]: I0121 16:10:00.455037 4760 generic.go:334] "Generic (PLEG): container finished" podID="bcb4a273-5a24-4d7b-b071-53db16ef9f47" containerID="f84fee895d5b623eba76a6d52894ce3241208b6a45938ac93ce1de5aef19d4f7" exitCode=0
Jan 21 16:10:00 crc kubenswrapper[4760]: I0121 16:10:00.455148 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-vdmds" event={"ID":"bcb4a273-5a24-4d7b-b071-53db16ef9f47","Type":"ContainerDied","Data":"f84fee895d5b623eba76a6d52894ce3241208b6a45938ac93ce1de5aef19d4f7"}
Jan 21 16:10:00 crc kubenswrapper[4760]: I0121 16:10:00.715127 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 21 16:10:00 crc kubenswrapper[4760]: I0121 16:10:00.715275 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 21 16:10:01 crc kubenswrapper[4760]: I0121 16:10:01.731511 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="12b3ce13-1f05-40e4-a800-1436993b565e" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 21 16:10:01 crc kubenswrapper[4760]: I0121 16:10:01.732195 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="12b3ce13-1f05-40e4-a800-1436993b565e" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 21 16:10:01 crc kubenswrapper[4760]: I0121 16:10:01.847098 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-vdmds"
Jan 21 16:10:02 crc kubenswrapper[4760]: I0121 16:10:02.005957 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcb4a273-5a24-4d7b-b071-53db16ef9f47-config-data\") pod \"bcb4a273-5a24-4d7b-b071-53db16ef9f47\" (UID: \"bcb4a273-5a24-4d7b-b071-53db16ef9f47\") "
Jan 21 16:10:02 crc kubenswrapper[4760]: I0121 16:10:02.006074 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drvr2\" (UniqueName: \"kubernetes.io/projected/bcb4a273-5a24-4d7b-b071-53db16ef9f47-kube-api-access-drvr2\") pod \"bcb4a273-5a24-4d7b-b071-53db16ef9f47\" (UID: \"bcb4a273-5a24-4d7b-b071-53db16ef9f47\") "
Jan 21 16:10:02 crc kubenswrapper[4760]: I0121 16:10:02.006194 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bcb4a273-5a24-4d7b-b071-53db16ef9f47-scripts\") pod \"bcb4a273-5a24-4d7b-b071-53db16ef9f47\" (UID: \"bcb4a273-5a24-4d7b-b071-53db16ef9f47\") "
Jan 21 16:10:02 crc kubenswrapper[4760]: I0121 16:10:02.006246 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcb4a273-5a24-4d7b-b071-53db16ef9f47-combined-ca-bundle\") pod \"bcb4a273-5a24-4d7b-b071-53db16ef9f47\" (UID: \"bcb4a273-5a24-4d7b-b071-53db16ef9f47\") "
Jan 21 16:10:02 crc kubenswrapper[4760]: I0121 16:10:02.013298 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcb4a273-5a24-4d7b-b071-53db16ef9f47-kube-api-access-drvr2" (OuterVolumeSpecName: "kube-api-access-drvr2") pod "bcb4a273-5a24-4d7b-b071-53db16ef9f47" (UID: "bcb4a273-5a24-4d7b-b071-53db16ef9f47"). InnerVolumeSpecName "kube-api-access-drvr2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:10:02 crc kubenswrapper[4760]: I0121 16:10:02.033225 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcb4a273-5a24-4d7b-b071-53db16ef9f47-scripts" (OuterVolumeSpecName: "scripts") pod "bcb4a273-5a24-4d7b-b071-53db16ef9f47" (UID: "bcb4a273-5a24-4d7b-b071-53db16ef9f47"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:10:02 crc kubenswrapper[4760]: I0121 16:10:02.050706 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcb4a273-5a24-4d7b-b071-53db16ef9f47-config-data" (OuterVolumeSpecName: "config-data") pod "bcb4a273-5a24-4d7b-b071-53db16ef9f47" (UID: "bcb4a273-5a24-4d7b-b071-53db16ef9f47"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:10:02 crc kubenswrapper[4760]: I0121 16:10:02.051635 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcb4a273-5a24-4d7b-b071-53db16ef9f47-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bcb4a273-5a24-4d7b-b071-53db16ef9f47" (UID: "bcb4a273-5a24-4d7b-b071-53db16ef9f47"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:10:02 crc kubenswrapper[4760]: I0121 16:10:02.110751 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drvr2\" (UniqueName: \"kubernetes.io/projected/bcb4a273-5a24-4d7b-b071-53db16ef9f47-kube-api-access-drvr2\") on node \"crc\" DevicePath \"\""
Jan 21 16:10:02 crc kubenswrapper[4760]: I0121 16:10:02.110794 4760 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bcb4a273-5a24-4d7b-b071-53db16ef9f47-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 16:10:02 crc kubenswrapper[4760]: I0121 16:10:02.110808 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcb4a273-5a24-4d7b-b071-53db16ef9f47-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 16:10:02 crc kubenswrapper[4760]: I0121 16:10:02.110820 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcb4a273-5a24-4d7b-b071-53db16ef9f47-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 16:10:02 crc kubenswrapper[4760]: I0121 16:10:02.485530 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-vdmds" event={"ID":"bcb4a273-5a24-4d7b-b071-53db16ef9f47","Type":"ContainerDied","Data":"4cb4398c279d0aaaf3ee3d1e2210c49647b96d17cf4babb99ff46d61260d3358"}
Jan 21 16:10:02 crc kubenswrapper[4760]: I0121 16:10:02.485589 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4cb4398c279d0aaaf3ee3d1e2210c49647b96d17cf4babb99ff46d61260d3358"
Jan 21 16:10:02 crc kubenswrapper[4760]: I0121 16:10:02.486497 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-vdmds"
Jan 21 16:10:02 crc kubenswrapper[4760]: I0121 16:10:02.696379 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 21 16:10:02 crc kubenswrapper[4760]: I0121 16:10:02.696703 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="12b3ce13-1f05-40e4-a800-1436993b565e" containerName="nova-api-log" containerID="cri-o://e6c49eb99204207dacfadb23a28f978e095ae29ae3d4782c5bf33a056183b430" gracePeriod=30
Jan 21 16:10:02 crc kubenswrapper[4760]: I0121 16:10:02.696878 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="12b3ce13-1f05-40e4-a800-1436993b565e" containerName="nova-api-api" containerID="cri-o://e054add364221c34419aa9521f1d811056bcefe3992b6d4c10f32973e0e71b8f" gracePeriod=30
Jan 21 16:10:02 crc kubenswrapper[4760]: I0121 16:10:02.708561 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 21 16:10:02 crc kubenswrapper[4760]: I0121 16:10:02.708821 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="977a4ae3-97df-4bc4-be2d-7cc230908f0c" containerName="nova-scheduler-scheduler" containerID="cri-o://3ae69e7b998f3c4c12f2876ad704f8fe62fbf6f0bd6ff0ee8a3fb1ae6e455567" gracePeriod=30
Jan 21 16:10:02 crc kubenswrapper[4760]: I0121 16:10:02.737420 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 21 16:10:02 crc kubenswrapper[4760]: I0121 16:10:02.738813 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="35c6bade-acb2-42c1-8c99-057c06eb8276" containerName="nova-metadata-log" containerID="cri-o://9608aed18863e1ce943813daf98a4d30895862d8cae4f22dabffae5a54efad17" gracePeriod=30
Jan 21 16:10:02 crc kubenswrapper[4760]: I0121 16:10:02.739317 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="35c6bade-acb2-42c1-8c99-057c06eb8276" containerName="nova-metadata-metadata" containerID="cri-o://25ba7304db6d7e30d6b0471f6ab7040e11e85f4aa0b9b2c363b83ece5ec361ef" gracePeriod=30
Jan 21 16:10:03 crc kubenswrapper[4760]: I0121 16:10:03.498317 4760 generic.go:334] "Generic (PLEG): container finished" podID="12b3ce13-1f05-40e4-a800-1436993b565e" containerID="e6c49eb99204207dacfadb23a28f978e095ae29ae3d4782c5bf33a056183b430" exitCode=143
Jan 21 16:10:03 crc kubenswrapper[4760]: I0121 16:10:03.498370 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"12b3ce13-1f05-40e4-a800-1436993b565e","Type":"ContainerDied","Data":"e6c49eb99204207dacfadb23a28f978e095ae29ae3d4782c5bf33a056183b430"}
Jan 21 16:10:03 crc kubenswrapper[4760]: I0121 16:10:03.500666 4760 generic.go:334] "Generic (PLEG): container finished" podID="977a4ae3-97df-4bc4-be2d-7cc230908f0c" containerID="3ae69e7b998f3c4c12f2876ad704f8fe62fbf6f0bd6ff0ee8a3fb1ae6e455567" exitCode=0
Jan 21 16:10:03 crc kubenswrapper[4760]: I0121 16:10:03.500718 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"977a4ae3-97df-4bc4-be2d-7cc230908f0c","Type":"ContainerDied","Data":"3ae69e7b998f3c4c12f2876ad704f8fe62fbf6f0bd6ff0ee8a3fb1ae6e455567"}
Jan 21 16:10:03 crc kubenswrapper[4760]: I0121 16:10:03.503481 4760 generic.go:334] "Generic (PLEG): container finished" podID="35c6bade-acb2-42c1-8c99-057c06eb8276" containerID="9608aed18863e1ce943813daf98a4d30895862d8cae4f22dabffae5a54efad17" exitCode=143
Jan 21 16:10:03 crc kubenswrapper[4760]: I0121 16:10:03.503516 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"35c6bade-acb2-42c1-8c99-057c06eb8276","Type":"ContainerDied","Data":"9608aed18863e1ce943813daf98a4d30895862d8cae4f22dabffae5a54efad17"}
Jan 21 16:10:03 crc kubenswrapper[4760]: I0121 16:10:03.853473 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 21 16:10:03 crc kubenswrapper[4760]: I0121 16:10:03.946998 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbmd4\" (UniqueName: \"kubernetes.io/projected/977a4ae3-97df-4bc4-be2d-7cc230908f0c-kube-api-access-fbmd4\") pod \"977a4ae3-97df-4bc4-be2d-7cc230908f0c\" (UID: \"977a4ae3-97df-4bc4-be2d-7cc230908f0c\") "
Jan 21 16:10:03 crc kubenswrapper[4760]: I0121 16:10:03.947076 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/977a4ae3-97df-4bc4-be2d-7cc230908f0c-combined-ca-bundle\") pod \"977a4ae3-97df-4bc4-be2d-7cc230908f0c\" (UID: \"977a4ae3-97df-4bc4-be2d-7cc230908f0c\") "
Jan 21 16:10:03 crc kubenswrapper[4760]: I0121 16:10:03.947218 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/977a4ae3-97df-4bc4-be2d-7cc230908f0c-config-data\") pod \"977a4ae3-97df-4bc4-be2d-7cc230908f0c\" (UID: \"977a4ae3-97df-4bc4-be2d-7cc230908f0c\") "
Jan 21 16:10:03 crc kubenswrapper[4760]: I0121 16:10:03.953999 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/977a4ae3-97df-4bc4-be2d-7cc230908f0c-kube-api-access-fbmd4" (OuterVolumeSpecName: "kube-api-access-fbmd4") pod "977a4ae3-97df-4bc4-be2d-7cc230908f0c" (UID: "977a4ae3-97df-4bc4-be2d-7cc230908f0c"). InnerVolumeSpecName "kube-api-access-fbmd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:10:03 crc kubenswrapper[4760]: I0121 16:10:03.981691 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/977a4ae3-97df-4bc4-be2d-7cc230908f0c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "977a4ae3-97df-4bc4-be2d-7cc230908f0c" (UID: "977a4ae3-97df-4bc4-be2d-7cc230908f0c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:10:03 crc kubenswrapper[4760]: I0121 16:10:03.982573 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/977a4ae3-97df-4bc4-be2d-7cc230908f0c-config-data" (OuterVolumeSpecName: "config-data") pod "977a4ae3-97df-4bc4-be2d-7cc230908f0c" (UID: "977a4ae3-97df-4bc4-be2d-7cc230908f0c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:10:04 crc kubenswrapper[4760]: I0121 16:10:04.050357 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbmd4\" (UniqueName: \"kubernetes.io/projected/977a4ae3-97df-4bc4-be2d-7cc230908f0c-kube-api-access-fbmd4\") on node \"crc\" DevicePath \"\""
Jan 21 16:10:04 crc kubenswrapper[4760]: I0121 16:10:04.050414 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/977a4ae3-97df-4bc4-be2d-7cc230908f0c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 16:10:04 crc kubenswrapper[4760]: I0121 16:10:04.050428 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/977a4ae3-97df-4bc4-be2d-7cc230908f0c-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 16:10:04 crc kubenswrapper[4760]: I0121 16:10:04.513336 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"977a4ae3-97df-4bc4-be2d-7cc230908f0c","Type":"ContainerDied","Data":"b817c1aab3260e56cb12d3ceb93b1cd2bd3c33efcad5f9f08aaec8f7eed6bb57"}
Jan 21 16:10:04 crc kubenswrapper[4760]: I0121 16:10:04.513616 4760 scope.go:117] "RemoveContainer" containerID="3ae69e7b998f3c4c12f2876ad704f8fe62fbf6f0bd6ff0ee8a3fb1ae6e455567"
Jan 21 16:10:04 crc kubenswrapper[4760]: I0121 16:10:04.513376 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 21 16:10:04 crc kubenswrapper[4760]: I0121 16:10:04.557113 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 21 16:10:04 crc kubenswrapper[4760]: I0121 16:10:04.571509 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 21 16:10:04 crc kubenswrapper[4760]: I0121 16:10:04.582664 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Jan 21 16:10:04 crc kubenswrapper[4760]: E0121 16:10:04.584466 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d894810-0b12-4078-9edb-9b78d95cd5f4" containerName="dnsmasq-dns"
Jan 21 16:10:04 crc kubenswrapper[4760]: I0121 16:10:04.584500 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d894810-0b12-4078-9edb-9b78d95cd5f4" containerName="dnsmasq-dns"
Jan 21 16:10:04 crc kubenswrapper[4760]: E0121 16:10:04.584534 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcb4a273-5a24-4d7b-b071-53db16ef9f47" containerName="nova-manage"
Jan 21 16:10:04 crc kubenswrapper[4760]: I0121 16:10:04.584542 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcb4a273-5a24-4d7b-b071-53db16ef9f47" containerName="nova-manage"
Jan 21 16:10:04 crc kubenswrapper[4760]: E0121 16:10:04.584558 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d894810-0b12-4078-9edb-9b78d95cd5f4" containerName="init"
Jan 21 16:10:04 crc kubenswrapper[4760]: I0121 16:10:04.584565 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d894810-0b12-4078-9edb-9b78d95cd5f4" containerName="init"
Jan 21 16:10:04 crc kubenswrapper[4760]: E0121 16:10:04.584580 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="977a4ae3-97df-4bc4-be2d-7cc230908f0c" containerName="nova-scheduler-scheduler"
Jan 21 16:10:04 crc kubenswrapper[4760]: I0121 16:10:04.584587 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="977a4ae3-97df-4bc4-be2d-7cc230908f0c" containerName="nova-scheduler-scheduler"
Jan 21 16:10:04 crc kubenswrapper[4760]: I0121 16:10:04.584816 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d894810-0b12-4078-9edb-9b78d95cd5f4" containerName="dnsmasq-dns"
Jan 21 16:10:04 crc kubenswrapper[4760]: I0121 16:10:04.584856 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="977a4ae3-97df-4bc4-be2d-7cc230908f0c" containerName="nova-scheduler-scheduler"
Jan 21 16:10:04 crc kubenswrapper[4760]: I0121 16:10:04.584870 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcb4a273-5a24-4d7b-b071-53db16ef9f47" containerName="nova-manage"
Jan 21 16:10:04 crc kubenswrapper[4760]: I0121 16:10:04.591788 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 21 16:10:04 crc kubenswrapper[4760]: I0121 16:10:04.592360 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 21 16:10:04 crc kubenswrapper[4760]: I0121 16:10:04.594635 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Jan 21 16:10:04 crc kubenswrapper[4760]: I0121 16:10:04.764756 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/582a5834-a028-489f-943f-8928d5d9f26c-config-data\") pod \"nova-scheduler-0\" (UID: \"582a5834-a028-489f-943f-8928d5d9f26c\") " pod="openstack/nova-scheduler-0"
Jan 21 16:10:04 crc kubenswrapper[4760]: I0121 16:10:04.764816 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/582a5834-a028-489f-943f-8928d5d9f26c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"582a5834-a028-489f-943f-8928d5d9f26c\") " pod="openstack/nova-scheduler-0"
Jan 21 16:10:04 crc kubenswrapper[4760]: I0121 16:10:04.764906 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvtsd\" (UniqueName: \"kubernetes.io/projected/582a5834-a028-489f-943f-8928d5d9f26c-kube-api-access-vvtsd\") pod \"nova-scheduler-0\" (UID: \"582a5834-a028-489f-943f-8928d5d9f26c\") " pod="openstack/nova-scheduler-0"
Jan 21 16:10:04 crc kubenswrapper[4760]: I0121 16:10:04.866280 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/582a5834-a028-489f-943f-8928d5d9f26c-config-data\") pod \"nova-scheduler-0\" (UID: \"582a5834-a028-489f-943f-8928d5d9f26c\") " pod="openstack/nova-scheduler-0"
Jan 21 16:10:04 crc kubenswrapper[4760]: I0121 16:10:04.866351 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/582a5834-a028-489f-943f-8928d5d9f26c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"582a5834-a028-489f-943f-8928d5d9f26c\") " pod="openstack/nova-scheduler-0"
Jan 21 16:10:04 crc kubenswrapper[4760]: I0121 16:10:04.866417 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvtsd\" (UniqueName: \"kubernetes.io/projected/582a5834-a028-489f-943f-8928d5d9f26c-kube-api-access-vvtsd\") pod \"nova-scheduler-0\" (UID: \"582a5834-a028-489f-943f-8928d5d9f26c\") " pod="openstack/nova-scheduler-0"
Jan 21 16:10:04 crc kubenswrapper[4760]: I0121 16:10:04.872767 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/582a5834-a028-489f-943f-8928d5d9f26c-config-data\") pod \"nova-scheduler-0\" (UID: \"582a5834-a028-489f-943f-8928d5d9f26c\") " pod="openstack/nova-scheduler-0"
Jan 21 16:10:04 crc kubenswrapper[4760]: I0121 16:10:04.872933 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/582a5834-a028-489f-943f-8928d5d9f26c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"582a5834-a028-489f-943f-8928d5d9f26c\") " pod="openstack/nova-scheduler-0"
Jan 21 16:10:04 crc kubenswrapper[4760]: I0121 16:10:04.893934 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvtsd\" (UniqueName: \"kubernetes.io/projected/582a5834-a028-489f-943f-8928d5d9f26c-kube-api-access-vvtsd\") pod \"nova-scheduler-0\" (UID: \"582a5834-a028-489f-943f-8928d5d9f26c\") " pod="openstack/nova-scheduler-0"
Jan 21 16:10:04 crc kubenswrapper[4760]: I0121 16:10:04.919735 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 21 16:10:05 crc kubenswrapper[4760]: I0121 16:10:05.465524 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 21 16:10:05 crc kubenswrapper[4760]: I0121 16:10:05.526181 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"582a5834-a028-489f-943f-8928d5d9f26c","Type":"ContainerStarted","Data":"8808a6bddafbac4c3fa4ee12b2258a10fb94a821f86a829771cfc3bc2778b009"}
Jan 21 16:10:05 crc kubenswrapper[4760]: I0121 16:10:05.633315 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="977a4ae3-97df-4bc4-be2d-7cc230908f0c" path="/var/lib/kubelet/pods/977a4ae3-97df-4bc4-be2d-7cc230908f0c/volumes"
Jan 21 16:10:05 crc kubenswrapper[4760]: I0121 16:10:05.870038 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="35c6bade-acb2-42c1-8c99-057c06eb8276" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.198:8775/\": read tcp 10.217.0.2:58812->10.217.0.198:8775: read: connection reset by peer"
Jan 21 16:10:05 crc kubenswrapper[4760]: I0121 16:10:05.870052 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="35c6bade-acb2-42c1-8c99-057c06eb8276" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.198:8775/\": read tcp 10.217.0.2:58828->10.217.0.198:8775: read: connection reset by peer"
Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.370970 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.398379 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35c6bade-acb2-42c1-8c99-057c06eb8276-config-data\") pod \"35c6bade-acb2-42c1-8c99-057c06eb8276\" (UID: \"35c6bade-acb2-42c1-8c99-057c06eb8276\") "
Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.398465 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35c6bade-acb2-42c1-8c99-057c06eb8276-combined-ca-bundle\") pod \"35c6bade-acb2-42c1-8c99-057c06eb8276\" (UID: \"35c6bade-acb2-42c1-8c99-057c06eb8276\") "
Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.398613 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4px98\" (UniqueName: \"kubernetes.io/projected/35c6bade-acb2-42c1-8c99-057c06eb8276-kube-api-access-4px98\") pod \"35c6bade-acb2-42c1-8c99-057c06eb8276\" (UID: \"35c6bade-acb2-42c1-8c99-057c06eb8276\") "
Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.398884 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35c6bade-acb2-42c1-8c99-057c06eb8276-logs\") pod \"35c6bade-acb2-42c1-8c99-057c06eb8276\" (UID: \"35c6bade-acb2-42c1-8c99-057c06eb8276\") "
Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.399025 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/35c6bade-acb2-42c1-8c99-057c06eb8276-nova-metadata-tls-certs\") pod \"35c6bade-acb2-42c1-8c99-057c06eb8276\" (UID: \"35c6bade-acb2-42c1-8c99-057c06eb8276\") "
Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.399654 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35c6bade-acb2-42c1-8c99-057c06eb8276-logs" (OuterVolumeSpecName: "logs") pod "35c6bade-acb2-42c1-8c99-057c06eb8276" (UID: "35c6bade-acb2-42c1-8c99-057c06eb8276"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.399744 4760 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35c6bade-acb2-42c1-8c99-057c06eb8276-logs\") on node \"crc\" DevicePath \"\""
Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.418681 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35c6bade-acb2-42c1-8c99-057c06eb8276-kube-api-access-4px98" (OuterVolumeSpecName: "kube-api-access-4px98") pod "35c6bade-acb2-42c1-8c99-057c06eb8276" (UID: "35c6bade-acb2-42c1-8c99-057c06eb8276"). InnerVolumeSpecName "kube-api-access-4px98". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.463345 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35c6bade-acb2-42c1-8c99-057c06eb8276-config-data" (OuterVolumeSpecName: "config-data") pod "35c6bade-acb2-42c1-8c99-057c06eb8276" (UID: "35c6bade-acb2-42c1-8c99-057c06eb8276"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.471194 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35c6bade-acb2-42c1-8c99-057c06eb8276-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "35c6bade-acb2-42c1-8c99-057c06eb8276" (UID: "35c6bade-acb2-42c1-8c99-057c06eb8276"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.474175 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35c6bade-acb2-42c1-8c99-057c06eb8276-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "35c6bade-acb2-42c1-8c99-057c06eb8276" (UID: "35c6bade-acb2-42c1-8c99-057c06eb8276"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.501731 4760 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/35c6bade-acb2-42c1-8c99-057c06eb8276-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.501780 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35c6bade-acb2-42c1-8c99-057c06eb8276-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.501794 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35c6bade-acb2-42c1-8c99-057c06eb8276-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.501806 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4px98\" (UniqueName: \"kubernetes.io/projected/35c6bade-acb2-42c1-8c99-057c06eb8276-kube-api-access-4px98\") on node \"crc\" DevicePath \"\""
Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.552010 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"582a5834-a028-489f-943f-8928d5d9f26c","Type":"ContainerStarted","Data":"41fa322c740f487bfdf46c0c75acac32f9e33c148312136fd6abd6a6cd528033"}
Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.555701 4760 generic.go:334] "Generic (PLEG): container finished" podID="35c6bade-acb2-42c1-8c99-057c06eb8276" containerID="25ba7304db6d7e30d6b0471f6ab7040e11e85f4aa0b9b2c363b83ece5ec361ef" exitCode=0
Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.555745 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"35c6bade-acb2-42c1-8c99-057c06eb8276","Type":"ContainerDied","Data":"25ba7304db6d7e30d6b0471f6ab7040e11e85f4aa0b9b2c363b83ece5ec361ef"}
Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.555780 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"35c6bade-acb2-42c1-8c99-057c06eb8276","Type":"ContainerDied","Data":"1ee015ca957a7beae356061a83ef5c01d4e053e8918062c6fd230aa253aca7d7"}
Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.555825 4760 scope.go:117] "RemoveContainer" containerID="25ba7304db6d7e30d6b0471f6ab7040e11e85f4aa0b9b2c363b83ece5ec361ef"
Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.555843 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.581107 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.581079422 podStartE2EDuration="2.581079422s" podCreationTimestamp="2026-01-21 16:10:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:10:06.572206428 +0000 UTC m=+1377.239976006" watchObservedRunningTime="2026-01-21 16:10:06.581079422 +0000 UTC m=+1377.248849000"
Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.598430 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.605481 4760 scope.go:117] "RemoveContainer" containerID="9608aed18863e1ce943813daf98a4d30895862d8cae4f22dabffae5a54efad17"
Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.610658 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.635865 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 21 16:10:06 crc kubenswrapper[4760]: E0121 16:10:06.636358 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35c6bade-acb2-42c1-8c99-057c06eb8276" containerName="nova-metadata-log"
Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.636378 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="35c6bade-acb2-42c1-8c99-057c06eb8276" containerName="nova-metadata-log"
Jan 21 16:10:06 crc kubenswrapper[4760]: E0121 16:10:06.636395 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35c6bade-acb2-42c1-8c99-057c06eb8276" containerName="nova-metadata-metadata"
Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.636401 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="35c6bade-acb2-42c1-8c99-057c06eb8276" containerName="nova-metadata-metadata"
Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.636592 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="35c6bade-acb2-42c1-8c99-057c06eb8276" containerName="nova-metadata-metadata"
Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.636622 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="35c6bade-acb2-42c1-8c99-057c06eb8276" containerName="nova-metadata-log"
Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.637652 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.641048 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.641361 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.646493 4760 scope.go:117] "RemoveContainer" containerID="25ba7304db6d7e30d6b0471f6ab7040e11e85f4aa0b9b2c363b83ece5ec361ef"
Jan 21 16:10:06 crc kubenswrapper[4760]: E0121 16:10:06.647695 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25ba7304db6d7e30d6b0471f6ab7040e11e85f4aa0b9b2c363b83ece5ec361ef\": container with ID starting with 25ba7304db6d7e30d6b0471f6ab7040e11e85f4aa0b9b2c363b83ece5ec361ef not found: ID does not exist" containerID="25ba7304db6d7e30d6b0471f6ab7040e11e85f4aa0b9b2c363b83ece5ec361ef"
Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.647745 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25ba7304db6d7e30d6b0471f6ab7040e11e85f4aa0b9b2c363b83ece5ec361ef"} err="failed to get container status \"25ba7304db6d7e30d6b0471f6ab7040e11e85f4aa0b9b2c363b83ece5ec361ef\": rpc error: code = NotFound desc = could not find container \"25ba7304db6d7e30d6b0471f6ab7040e11e85f4aa0b9b2c363b83ece5ec361ef\": container with ID starting with 25ba7304db6d7e30d6b0471f6ab7040e11e85f4aa0b9b2c363b83ece5ec361ef not found: ID does not exist"
Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.647766 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.647789 4760 scope.go:117] "RemoveContainer" containerID="9608aed18863e1ce943813daf98a4d30895862d8cae4f22dabffae5a54efad17"
Jan 21 16:10:06 crc kubenswrapper[4760]: E0121 16:10:06.648691 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9608aed18863e1ce943813daf98a4d30895862d8cae4f22dabffae5a54efad17\": container with ID starting with 9608aed18863e1ce943813daf98a4d30895862d8cae4f22dabffae5a54efad17 not found: ID does not exist" containerID="9608aed18863e1ce943813daf98a4d30895862d8cae4f22dabffae5a54efad17"
Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.648723 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9608aed18863e1ce943813daf98a4d30895862d8cae4f22dabffae5a54efad17"} err="failed to get container status \"9608aed18863e1ce943813daf98a4d30895862d8cae4f22dabffae5a54efad17\": rpc error: code = NotFound desc = could not find container \"9608aed18863e1ce943813daf98a4d30895862d8cae4f22dabffae5a54efad17\": container with ID starting with 9608aed18863e1ce943813daf98a4d30895862d8cae4f22dabffae5a54efad17 not found: ID does not exist"
Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.706017 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab3a95e8-224b-406c-b0ad-b184e8bec225-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ab3a95e8-224b-406c-b0ad-b184e8bec225\") " pod="openstack/nova-metadata-0"
Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.706202 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab3a95e8-224b-406c-b0ad-b184e8bec225-config-data\") pod \"nova-metadata-0\" (UID: \"ab3a95e8-224b-406c-b0ad-b184e8bec225\") " pod="openstack/nova-metadata-0"
Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.706240 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab3a95e8-224b-406c-b0ad-b184e8bec225-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ab3a95e8-224b-406c-b0ad-b184e8bec225\") " pod="openstack/nova-metadata-0"
Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.706354 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab3a95e8-224b-406c-b0ad-b184e8bec225-logs\") pod \"nova-metadata-0\" (UID: \"ab3a95e8-224b-406c-b0ad-b184e8bec225\") " pod="openstack/nova-metadata-0"
Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.706390 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2ltz\" (UniqueName: \"kubernetes.io/projected/ab3a95e8-224b-406c-b0ad-b184e8bec225-kube-api-access-d2ltz\") pod \"nova-metadata-0\" (UID: \"ab3a95e8-224b-406c-b0ad-b184e8bec225\") " pod="openstack/nova-metadata-0"
Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.807971 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab3a95e8-224b-406c-b0ad-b184e8bec225-logs\") pod \"nova-metadata-0\" (UID: \"ab3a95e8-224b-406c-b0ad-b184e8bec225\") " pod="openstack/nova-metadata-0"
Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.808070
4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2ltz\" (UniqueName: \"kubernetes.io/projected/ab3a95e8-224b-406c-b0ad-b184e8bec225-kube-api-access-d2ltz\") pod \"nova-metadata-0\" (UID: \"ab3a95e8-224b-406c-b0ad-b184e8bec225\") " pod="openstack/nova-metadata-0" Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.808095 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab3a95e8-224b-406c-b0ad-b184e8bec225-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ab3a95e8-224b-406c-b0ad-b184e8bec225\") " pod="openstack/nova-metadata-0" Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.808152 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab3a95e8-224b-406c-b0ad-b184e8bec225-config-data\") pod \"nova-metadata-0\" (UID: \"ab3a95e8-224b-406c-b0ad-b184e8bec225\") " pod="openstack/nova-metadata-0" Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.808174 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab3a95e8-224b-406c-b0ad-b184e8bec225-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ab3a95e8-224b-406c-b0ad-b184e8bec225\") " pod="openstack/nova-metadata-0" Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.809445 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab3a95e8-224b-406c-b0ad-b184e8bec225-logs\") pod \"nova-metadata-0\" (UID: \"ab3a95e8-224b-406c-b0ad-b184e8bec225\") " pod="openstack/nova-metadata-0" Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.813233 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ab3a95e8-224b-406c-b0ad-b184e8bec225-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ab3a95e8-224b-406c-b0ad-b184e8bec225\") " pod="openstack/nova-metadata-0" Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.815220 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab3a95e8-224b-406c-b0ad-b184e8bec225-config-data\") pod \"nova-metadata-0\" (UID: \"ab3a95e8-224b-406c-b0ad-b184e8bec225\") " pod="openstack/nova-metadata-0" Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.824700 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab3a95e8-224b-406c-b0ad-b184e8bec225-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ab3a95e8-224b-406c-b0ad-b184e8bec225\") " pod="openstack/nova-metadata-0" Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.825746 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2ltz\" (UniqueName: \"kubernetes.io/projected/ab3a95e8-224b-406c-b0ad-b184e8bec225-kube-api-access-d2ltz\") pod \"nova-metadata-0\" (UID: \"ab3a95e8-224b-406c-b0ad-b184e8bec225\") " pod="openstack/nova-metadata-0" Jan 21 16:10:06 crc kubenswrapper[4760]: I0121 16:10:06.970572 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 16:10:07 crc kubenswrapper[4760]: I0121 16:10:07.519082 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 16:10:07 crc kubenswrapper[4760]: I0121 16:10:07.570648 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ab3a95e8-224b-406c-b0ad-b184e8bec225","Type":"ContainerStarted","Data":"c8b54855e437c218bd20b4393a9e1c89b43d8fda1bed7f09aa66fb678624c20f"} Jan 21 16:10:07 crc kubenswrapper[4760]: I0121 16:10:07.576753 4760 generic.go:334] "Generic (PLEG): container finished" podID="12b3ce13-1f05-40e4-a800-1436993b565e" containerID="e054add364221c34419aa9521f1d811056bcefe3992b6d4c10f32973e0e71b8f" exitCode=0 Jan 21 16:10:07 crc kubenswrapper[4760]: I0121 16:10:07.577030 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"12b3ce13-1f05-40e4-a800-1436993b565e","Type":"ContainerDied","Data":"e054add364221c34419aa9521f1d811056bcefe3992b6d4c10f32973e0e71b8f"} Jan 21 16:10:07 crc kubenswrapper[4760]: I0121 16:10:07.641982 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35c6bade-acb2-42c1-8c99-057c06eb8276" path="/var/lib/kubelet/pods/35c6bade-acb2-42c1-8c99-057c06eb8276/volumes" Jan 21 16:10:07 crc kubenswrapper[4760]: I0121 16:10:07.763874 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 16:10:07 crc kubenswrapper[4760]: I0121 16:10:07.935104 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12b3ce13-1f05-40e4-a800-1436993b565e-combined-ca-bundle\") pod \"12b3ce13-1f05-40e4-a800-1436993b565e\" (UID: \"12b3ce13-1f05-40e4-a800-1436993b565e\") " Jan 21 16:10:07 crc kubenswrapper[4760]: I0121 16:10:07.935236 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75hmb\" (UniqueName: \"kubernetes.io/projected/12b3ce13-1f05-40e4-a800-1436993b565e-kube-api-access-75hmb\") pod \"12b3ce13-1f05-40e4-a800-1436993b565e\" (UID: \"12b3ce13-1f05-40e4-a800-1436993b565e\") " Jan 21 16:10:07 crc kubenswrapper[4760]: I0121 16:10:07.935274 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/12b3ce13-1f05-40e4-a800-1436993b565e-public-tls-certs\") pod \"12b3ce13-1f05-40e4-a800-1436993b565e\" (UID: \"12b3ce13-1f05-40e4-a800-1436993b565e\") " Jan 21 16:10:07 crc kubenswrapper[4760]: I0121 16:10:07.935443 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12b3ce13-1f05-40e4-a800-1436993b565e-logs\") pod \"12b3ce13-1f05-40e4-a800-1436993b565e\" (UID: \"12b3ce13-1f05-40e4-a800-1436993b565e\") " Jan 21 16:10:07 crc kubenswrapper[4760]: I0121 16:10:07.935527 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12b3ce13-1f05-40e4-a800-1436993b565e-config-data\") pod \"12b3ce13-1f05-40e4-a800-1436993b565e\" (UID: \"12b3ce13-1f05-40e4-a800-1436993b565e\") " Jan 21 16:10:07 crc kubenswrapper[4760]: I0121 16:10:07.935624 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/12b3ce13-1f05-40e4-a800-1436993b565e-internal-tls-certs\") pod \"12b3ce13-1f05-40e4-a800-1436993b565e\" (UID: \"12b3ce13-1f05-40e4-a800-1436993b565e\") " Jan 21 16:10:07 crc kubenswrapper[4760]: I0121 16:10:07.941767 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12b3ce13-1f05-40e4-a800-1436993b565e-logs" (OuterVolumeSpecName: "logs") pod "12b3ce13-1f05-40e4-a800-1436993b565e" (UID: "12b3ce13-1f05-40e4-a800-1436993b565e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:10:07 crc kubenswrapper[4760]: I0121 16:10:07.950857 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12b3ce13-1f05-40e4-a800-1436993b565e-kube-api-access-75hmb" (OuterVolumeSpecName: "kube-api-access-75hmb") pod "12b3ce13-1f05-40e4-a800-1436993b565e" (UID: "12b3ce13-1f05-40e4-a800-1436993b565e"). InnerVolumeSpecName "kube-api-access-75hmb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:10:07 crc kubenswrapper[4760]: I0121 16:10:07.976081 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12b3ce13-1f05-40e4-a800-1436993b565e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "12b3ce13-1f05-40e4-a800-1436993b565e" (UID: "12b3ce13-1f05-40e4-a800-1436993b565e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:10:07 crc kubenswrapper[4760]: I0121 16:10:07.978747 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12b3ce13-1f05-40e4-a800-1436993b565e-config-data" (OuterVolumeSpecName: "config-data") pod "12b3ce13-1f05-40e4-a800-1436993b565e" (UID: "12b3ce13-1f05-40e4-a800-1436993b565e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:10:07 crc kubenswrapper[4760]: I0121 16:10:07.997339 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12b3ce13-1f05-40e4-a800-1436993b565e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "12b3ce13-1f05-40e4-a800-1436993b565e" (UID: "12b3ce13-1f05-40e4-a800-1436993b565e"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:10:08 crc kubenswrapper[4760]: I0121 16:10:08.009306 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12b3ce13-1f05-40e4-a800-1436993b565e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "12b3ce13-1f05-40e4-a800-1436993b565e" (UID: "12b3ce13-1f05-40e4-a800-1436993b565e"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:10:08 crc kubenswrapper[4760]: I0121 16:10:08.039196 4760 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12b3ce13-1f05-40e4-a800-1436993b565e-logs\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:08 crc kubenswrapper[4760]: I0121 16:10:08.039241 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12b3ce13-1f05-40e4-a800-1436993b565e-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:08 crc kubenswrapper[4760]: I0121 16:10:08.039253 4760 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/12b3ce13-1f05-40e4-a800-1436993b565e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:08 crc kubenswrapper[4760]: I0121 16:10:08.039263 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12b3ce13-1f05-40e4-a800-1436993b565e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 
21 16:10:08 crc kubenswrapper[4760]: I0121 16:10:08.039273 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75hmb\" (UniqueName: \"kubernetes.io/projected/12b3ce13-1f05-40e4-a800-1436993b565e-kube-api-access-75hmb\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:08 crc kubenswrapper[4760]: I0121 16:10:08.039282 4760 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/12b3ce13-1f05-40e4-a800-1436993b565e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:08 crc kubenswrapper[4760]: I0121 16:10:08.599422 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ab3a95e8-224b-406c-b0ad-b184e8bec225","Type":"ContainerStarted","Data":"785dace11990079a4ed20d73a0766ec7a4c79b0686f0e7887e575acf616b8750"} Jan 21 16:10:08 crc kubenswrapper[4760]: I0121 16:10:08.600141 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ab3a95e8-224b-406c-b0ad-b184e8bec225","Type":"ContainerStarted","Data":"2d643e681f317ee8b04a03f82a87505f9bf3a800dc665c7adcd0b4675a577700"} Jan 21 16:10:08 crc kubenswrapper[4760]: I0121 16:10:08.603673 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"12b3ce13-1f05-40e4-a800-1436993b565e","Type":"ContainerDied","Data":"3bf0ad87f28bc3267629d0ca8c5141ae8bbbf1e3002cba6d4242dfc7d2cf2728"} Jan 21 16:10:08 crc kubenswrapper[4760]: I0121 16:10:08.603774 4760 scope.go:117] "RemoveContainer" containerID="e054add364221c34419aa9521f1d811056bcefe3992b6d4c10f32973e0e71b8f" Jan 21 16:10:08 crc kubenswrapper[4760]: I0121 16:10:08.603899 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 16:10:08 crc kubenswrapper[4760]: I0121 16:10:08.633219 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.633171765 podStartE2EDuration="2.633171765s" podCreationTimestamp="2026-01-21 16:10:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:10:08.624089195 +0000 UTC m=+1379.291858793" watchObservedRunningTime="2026-01-21 16:10:08.633171765 +0000 UTC m=+1379.300941343" Jan 21 16:10:08 crc kubenswrapper[4760]: I0121 16:10:08.646704 4760 scope.go:117] "RemoveContainer" containerID="e6c49eb99204207dacfadb23a28f978e095ae29ae3d4782c5bf33a056183b430" Jan 21 16:10:08 crc kubenswrapper[4760]: I0121 16:10:08.658269 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:10:08 crc kubenswrapper[4760]: I0121 16:10:08.670931 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:10:08 crc kubenswrapper[4760]: I0121 16:10:08.694635 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 21 16:10:08 crc kubenswrapper[4760]: E0121 16:10:08.695173 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12b3ce13-1f05-40e4-a800-1436993b565e" containerName="nova-api-log" Jan 21 16:10:08 crc kubenswrapper[4760]: I0121 16:10:08.695195 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="12b3ce13-1f05-40e4-a800-1436993b565e" containerName="nova-api-log" Jan 21 16:10:08 crc kubenswrapper[4760]: E0121 16:10:08.695221 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12b3ce13-1f05-40e4-a800-1436993b565e" containerName="nova-api-api" Jan 21 16:10:08 crc kubenswrapper[4760]: I0121 16:10:08.695227 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="12b3ce13-1f05-40e4-a800-1436993b565e" 
containerName="nova-api-api" Jan 21 16:10:08 crc kubenswrapper[4760]: I0121 16:10:08.695424 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="12b3ce13-1f05-40e4-a800-1436993b565e" containerName="nova-api-api" Jan 21 16:10:08 crc kubenswrapper[4760]: I0121 16:10:08.695452 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="12b3ce13-1f05-40e4-a800-1436993b565e" containerName="nova-api-log" Jan 21 16:10:08 crc kubenswrapper[4760]: I0121 16:10:08.696512 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 16:10:08 crc kubenswrapper[4760]: I0121 16:10:08.703118 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 21 16:10:08 crc kubenswrapper[4760]: I0121 16:10:08.706129 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 21 16:10:08 crc kubenswrapper[4760]: I0121 16:10:08.706315 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 21 16:10:08 crc kubenswrapper[4760]: I0121 16:10:08.729339 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:10:08 crc kubenswrapper[4760]: I0121 16:10:08.899407 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d5def02-0b1b-4b2e-b03c-028387759ced-logs\") pod \"nova-api-0\" (UID: \"0d5def02-0b1b-4b2e-b03c-028387759ced\") " pod="openstack/nova-api-0" Jan 21 16:10:08 crc kubenswrapper[4760]: I0121 16:10:08.899483 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d5def02-0b1b-4b2e-b03c-028387759ced-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0d5def02-0b1b-4b2e-b03c-028387759ced\") " pod="openstack/nova-api-0" Jan 21 16:10:08 crc 
kubenswrapper[4760]: I0121 16:10:08.899519 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d5def02-0b1b-4b2e-b03c-028387759ced-public-tls-certs\") pod \"nova-api-0\" (UID: \"0d5def02-0b1b-4b2e-b03c-028387759ced\") " pod="openstack/nova-api-0" Jan 21 16:10:08 crc kubenswrapper[4760]: I0121 16:10:08.899893 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9szz\" (UniqueName: \"kubernetes.io/projected/0d5def02-0b1b-4b2e-b03c-028387759ced-kube-api-access-z9szz\") pod \"nova-api-0\" (UID: \"0d5def02-0b1b-4b2e-b03c-028387759ced\") " pod="openstack/nova-api-0" Jan 21 16:10:08 crc kubenswrapper[4760]: I0121 16:10:08.900196 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d5def02-0b1b-4b2e-b03c-028387759ced-config-data\") pod \"nova-api-0\" (UID: \"0d5def02-0b1b-4b2e-b03c-028387759ced\") " pod="openstack/nova-api-0" Jan 21 16:10:08 crc kubenswrapper[4760]: I0121 16:10:08.900376 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d5def02-0b1b-4b2e-b03c-028387759ced-internal-tls-certs\") pod \"nova-api-0\" (UID: \"0d5def02-0b1b-4b2e-b03c-028387759ced\") " pod="openstack/nova-api-0" Jan 21 16:10:09 crc kubenswrapper[4760]: I0121 16:10:09.002611 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d5def02-0b1b-4b2e-b03c-028387759ced-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0d5def02-0b1b-4b2e-b03c-028387759ced\") " pod="openstack/nova-api-0" Jan 21 16:10:09 crc kubenswrapper[4760]: I0121 16:10:09.002700 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d5def02-0b1b-4b2e-b03c-028387759ced-public-tls-certs\") pod \"nova-api-0\" (UID: \"0d5def02-0b1b-4b2e-b03c-028387759ced\") " pod="openstack/nova-api-0" Jan 21 16:10:09 crc kubenswrapper[4760]: I0121 16:10:09.002752 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9szz\" (UniqueName: \"kubernetes.io/projected/0d5def02-0b1b-4b2e-b03c-028387759ced-kube-api-access-z9szz\") pod \"nova-api-0\" (UID: \"0d5def02-0b1b-4b2e-b03c-028387759ced\") " pod="openstack/nova-api-0" Jan 21 16:10:09 crc kubenswrapper[4760]: I0121 16:10:09.003052 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d5def02-0b1b-4b2e-b03c-028387759ced-config-data\") pod \"nova-api-0\" (UID: \"0d5def02-0b1b-4b2e-b03c-028387759ced\") " pod="openstack/nova-api-0" Jan 21 16:10:09 crc kubenswrapper[4760]: I0121 16:10:09.003097 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d5def02-0b1b-4b2e-b03c-028387759ced-internal-tls-certs\") pod \"nova-api-0\" (UID: \"0d5def02-0b1b-4b2e-b03c-028387759ced\") " pod="openstack/nova-api-0" Jan 21 16:10:09 crc kubenswrapper[4760]: I0121 16:10:09.003168 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d5def02-0b1b-4b2e-b03c-028387759ced-logs\") pod \"nova-api-0\" (UID: \"0d5def02-0b1b-4b2e-b03c-028387759ced\") " pod="openstack/nova-api-0" Jan 21 16:10:09 crc kubenswrapper[4760]: I0121 16:10:09.003826 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d5def02-0b1b-4b2e-b03c-028387759ced-logs\") pod \"nova-api-0\" (UID: \"0d5def02-0b1b-4b2e-b03c-028387759ced\") " pod="openstack/nova-api-0" Jan 21 16:10:09 crc kubenswrapper[4760]: I0121 16:10:09.011200 
4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d5def02-0b1b-4b2e-b03c-028387759ced-config-data\") pod \"nova-api-0\" (UID: \"0d5def02-0b1b-4b2e-b03c-028387759ced\") " pod="openstack/nova-api-0" Jan 21 16:10:09 crc kubenswrapper[4760]: I0121 16:10:09.022425 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d5def02-0b1b-4b2e-b03c-028387759ced-public-tls-certs\") pod \"nova-api-0\" (UID: \"0d5def02-0b1b-4b2e-b03c-028387759ced\") " pod="openstack/nova-api-0" Jan 21 16:10:09 crc kubenswrapper[4760]: I0121 16:10:09.195649 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d5def02-0b1b-4b2e-b03c-028387759ced-internal-tls-certs\") pod \"nova-api-0\" (UID: \"0d5def02-0b1b-4b2e-b03c-028387759ced\") " pod="openstack/nova-api-0" Jan 21 16:10:09 crc kubenswrapper[4760]: I0121 16:10:09.200862 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d5def02-0b1b-4b2e-b03c-028387759ced-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0d5def02-0b1b-4b2e-b03c-028387759ced\") " pod="openstack/nova-api-0" Jan 21 16:10:09 crc kubenswrapper[4760]: I0121 16:10:09.201437 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9szz\" (UniqueName: \"kubernetes.io/projected/0d5def02-0b1b-4b2e-b03c-028387759ced-kube-api-access-z9szz\") pod \"nova-api-0\" (UID: \"0d5def02-0b1b-4b2e-b03c-028387759ced\") " pod="openstack/nova-api-0" Jan 21 16:10:09 crc kubenswrapper[4760]: I0121 16:10:09.326603 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 16:10:09 crc kubenswrapper[4760]: I0121 16:10:09.634680 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12b3ce13-1f05-40e4-a800-1436993b565e" path="/var/lib/kubelet/pods/12b3ce13-1f05-40e4-a800-1436993b565e/volumes" Jan 21 16:10:09 crc kubenswrapper[4760]: W0121 16:10:09.795846 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d5def02_0b1b_4b2e_b03c_028387759ced.slice/crio-1d661e5b3c135919085387e682d2a8005f0b33d2b6faf165966731723b21843e WatchSource:0}: Error finding container 1d661e5b3c135919085387e682d2a8005f0b33d2b6faf165966731723b21843e: Status 404 returned error can't find the container with id 1d661e5b3c135919085387e682d2a8005f0b33d2b6faf165966731723b21843e Jan 21 16:10:09 crc kubenswrapper[4760]: I0121 16:10:09.804317 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 16:10:09 crc kubenswrapper[4760]: I0121 16:10:09.920146 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 21 16:10:10 crc kubenswrapper[4760]: I0121 16:10:10.626123 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0d5def02-0b1b-4b2e-b03c-028387759ced","Type":"ContainerStarted","Data":"c736a9fd8328a984d6056a871f12955bf936a617df238af5fff4a8f9ce27d384"} Jan 21 16:10:10 crc kubenswrapper[4760]: I0121 16:10:10.627401 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0d5def02-0b1b-4b2e-b03c-028387759ced","Type":"ContainerStarted","Data":"37939a81758f5f63a19d74ad6033deb202b794274413187868fca15cf50e362e"} Jan 21 16:10:10 crc kubenswrapper[4760]: I0121 16:10:10.627494 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"0d5def02-0b1b-4b2e-b03c-028387759ced","Type":"ContainerStarted","Data":"1d661e5b3c135919085387e682d2a8005f0b33d2b6faf165966731723b21843e"} Jan 21 16:10:11 crc kubenswrapper[4760]: I0121 16:10:11.685505 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.685481478 podStartE2EDuration="3.685481478s" podCreationTimestamp="2026-01-21 16:10:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:10:11.677138841 +0000 UTC m=+1382.344908419" watchObservedRunningTime="2026-01-21 16:10:11.685481478 +0000 UTC m=+1382.353251056" Jan 21 16:10:11 crc kubenswrapper[4760]: I0121 16:10:11.971356 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 16:10:11 crc kubenswrapper[4760]: I0121 16:10:11.971411 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 16:10:14 crc kubenswrapper[4760]: I0121 16:10:14.920608 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 21 16:10:14 crc kubenswrapper[4760]: I0121 16:10:14.948214 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 21 16:10:15 crc kubenswrapper[4760]: I0121 16:10:15.711822 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 21 16:10:16 crc kubenswrapper[4760]: I0121 16:10:16.972199 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 21 16:10:16 crc kubenswrapper[4760]: I0121 16:10:16.972276 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 21 16:10:17 crc kubenswrapper[4760]: I0121 16:10:17.989386 4760 prober.go:107] "Probe failed" 
probeType="Startup" pod="openstack/nova-metadata-0" podUID="ab3a95e8-224b-406c-b0ad-b184e8bec225" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.207:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 16:10:17 crc kubenswrapper[4760]: I0121 16:10:17.989797 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ab3a95e8-224b-406c-b0ad-b184e8bec225" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.207:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 16:10:19 crc kubenswrapper[4760]: I0121 16:10:19.327575 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 16:10:19 crc kubenswrapper[4760]: I0121 16:10:19.328492 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 16:10:19 crc kubenswrapper[4760]: I0121 16:10:19.717553 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 21 16:10:20 crc kubenswrapper[4760]: I0121 16:10:20.344099 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0d5def02-0b1b-4b2e-b03c-028387759ced" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.208:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 16:10:20 crc kubenswrapper[4760]: I0121 16:10:20.344180 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0d5def02-0b1b-4b2e-b03c-028387759ced" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.208:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 16:10:26 crc kubenswrapper[4760]: I0121 16:10:26.979258 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openstack/nova-metadata-0" Jan 21 16:10:26 crc kubenswrapper[4760]: I0121 16:10:26.980338 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 21 16:10:26 crc kubenswrapper[4760]: I0121 16:10:26.986310 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 21 16:10:26 crc kubenswrapper[4760]: I0121 16:10:26.986398 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 21 16:10:29 crc kubenswrapper[4760]: I0121 16:10:29.357975 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 16:10:29 crc kubenswrapper[4760]: I0121 16:10:29.358928 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 16:10:29 crc kubenswrapper[4760]: I0121 16:10:29.362222 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 16:10:29 crc kubenswrapper[4760]: I0121 16:10:29.366020 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 16:10:29 crc kubenswrapper[4760]: I0121 16:10:29.864869 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 16:10:29 crc kubenswrapper[4760]: I0121 16:10:29.872711 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 16:10:38 crc kubenswrapper[4760]: I0121 16:10:38.182153 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 16:10:39 crc kubenswrapper[4760]: I0121 16:10:39.111755 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 16:10:43 crc kubenswrapper[4760]: I0121 16:10:43.035449 4760 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/rabbitmq-server-0" podUID="7d829f67-5ff7-4334-bb2d-2767a311159c" containerName="rabbitmq" containerID="cri-o://89470c1652101ad07a87ab6b23b09d7df3ba2057edc70fbe168058f62b83e864" gracePeriod=604796 Jan 21 16:10:43 crc kubenswrapper[4760]: I0121 16:10:43.957977 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="06b9d67d-1790-43ec-8009-91d0cd43e6da" containerName="rabbitmq" containerID="cri-o://6942ed082be8bf1424cd7eaa29502c9f4d5dda5f7ad1ce546eb080ece69798c2" gracePeriod=604796 Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.285101 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="06b9d67d-1790-43ec-8009-91d0cd43e6da" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused" Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.682235 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.773068 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7d829f67-5ff7-4334-bb2d-2767a311159c-pod-info\") pod \"7d829f67-5ff7-4334-bb2d-2767a311159c\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.773182 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7d829f67-5ff7-4334-bb2d-2767a311159c-rabbitmq-plugins\") pod \"7d829f67-5ff7-4334-bb2d-2767a311159c\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.773289 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9lql\" (UniqueName: \"kubernetes.io/projected/7d829f67-5ff7-4334-bb2d-2767a311159c-kube-api-access-q9lql\") pod \"7d829f67-5ff7-4334-bb2d-2767a311159c\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.773317 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7d829f67-5ff7-4334-bb2d-2767a311159c-rabbitmq-confd\") pod \"7d829f67-5ff7-4334-bb2d-2767a311159c\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.773388 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7d829f67-5ff7-4334-bb2d-2767a311159c-rabbitmq-erlang-cookie\") pod \"7d829f67-5ff7-4334-bb2d-2767a311159c\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.773441 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"server-conf\" (UniqueName: \"kubernetes.io/configmap/7d829f67-5ff7-4334-bb2d-2767a311159c-server-conf\") pod \"7d829f67-5ff7-4334-bb2d-2767a311159c\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.773507 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7d829f67-5ff7-4334-bb2d-2767a311159c-plugins-conf\") pod \"7d829f67-5ff7-4334-bb2d-2767a311159c\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.773559 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"7d829f67-5ff7-4334-bb2d-2767a311159c\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.773596 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7d829f67-5ff7-4334-bb2d-2767a311159c-erlang-cookie-secret\") pod \"7d829f67-5ff7-4334-bb2d-2767a311159c\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.773651 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7d829f67-5ff7-4334-bb2d-2767a311159c-config-data\") pod \"7d829f67-5ff7-4334-bb2d-2767a311159c\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.773728 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7d829f67-5ff7-4334-bb2d-2767a311159c-rabbitmq-tls\") pod \"7d829f67-5ff7-4334-bb2d-2767a311159c\" (UID: \"7d829f67-5ff7-4334-bb2d-2767a311159c\") " Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.776835 
4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d829f67-5ff7-4334-bb2d-2767a311159c-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "7d829f67-5ff7-4334-bb2d-2767a311159c" (UID: "7d829f67-5ff7-4334-bb2d-2767a311159c"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.777298 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d829f67-5ff7-4334-bb2d-2767a311159c-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "7d829f67-5ff7-4334-bb2d-2767a311159c" (UID: "7d829f67-5ff7-4334-bb2d-2767a311159c"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.783529 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d829f67-5ff7-4334-bb2d-2767a311159c-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "7d829f67-5ff7-4334-bb2d-2767a311159c" (UID: "7d829f67-5ff7-4334-bb2d-2767a311159c"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.785888 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "7d829f67-5ff7-4334-bb2d-2767a311159c" (UID: "7d829f67-5ff7-4334-bb2d-2767a311159c"). InnerVolumeSpecName "local-storage01-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.788361 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d829f67-5ff7-4334-bb2d-2767a311159c-kube-api-access-q9lql" (OuterVolumeSpecName: "kube-api-access-q9lql") pod "7d829f67-5ff7-4334-bb2d-2767a311159c" (UID: "7d829f67-5ff7-4334-bb2d-2767a311159c"). InnerVolumeSpecName "kube-api-access-q9lql". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.792227 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/7d829f67-5ff7-4334-bb2d-2767a311159c-pod-info" (OuterVolumeSpecName: "pod-info") pod "7d829f67-5ff7-4334-bb2d-2767a311159c" (UID: "7d829f67-5ff7-4334-bb2d-2767a311159c"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.800086 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d829f67-5ff7-4334-bb2d-2767a311159c-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "7d829f67-5ff7-4334-bb2d-2767a311159c" (UID: "7d829f67-5ff7-4334-bb2d-2767a311159c"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.816133 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d829f67-5ff7-4334-bb2d-2767a311159c-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "7d829f67-5ff7-4334-bb2d-2767a311159c" (UID: "7d829f67-5ff7-4334-bb2d-2767a311159c"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.842739 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d829f67-5ff7-4334-bb2d-2767a311159c-config-data" (OuterVolumeSpecName: "config-data") pod "7d829f67-5ff7-4334-bb2d-2767a311159c" (UID: "7d829f67-5ff7-4334-bb2d-2767a311159c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.876426 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9lql\" (UniqueName: \"kubernetes.io/projected/7d829f67-5ff7-4334-bb2d-2767a311159c-kube-api-access-q9lql\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.876462 4760 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7d829f67-5ff7-4334-bb2d-2767a311159c-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.876472 4760 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7d829f67-5ff7-4334-bb2d-2767a311159c-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.876506 4760 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.876517 4760 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7d829f67-5ff7-4334-bb2d-2767a311159c-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.876525 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/7d829f67-5ff7-4334-bb2d-2767a311159c-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.876533 4760 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7d829f67-5ff7-4334-bb2d-2767a311159c-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.876542 4760 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7d829f67-5ff7-4334-bb2d-2767a311159c-pod-info\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.876551 4760 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7d829f67-5ff7-4334-bb2d-2767a311159c-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.878444 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d829f67-5ff7-4334-bb2d-2767a311159c-server-conf" (OuterVolumeSpecName: "server-conf") pod "7d829f67-5ff7-4334-bb2d-2767a311159c" (UID: "7d829f67-5ff7-4334-bb2d-2767a311159c"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.905339 4760 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.933674 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d829f67-5ff7-4334-bb2d-2767a311159c-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "7d829f67-5ff7-4334-bb2d-2767a311159c" (UID: "7d829f67-5ff7-4334-bb2d-2767a311159c"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.978831 4760 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.979126 4760 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7d829f67-5ff7-4334-bb2d-2767a311159c-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:49 crc kubenswrapper[4760]: I0121 16:10:49.979429 4760 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7d829f67-5ff7-4334-bb2d-2767a311159c-server-conf\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.050453 4760 generic.go:334] "Generic (PLEG): container finished" podID="7d829f67-5ff7-4334-bb2d-2767a311159c" containerID="89470c1652101ad07a87ab6b23b09d7df3ba2057edc70fbe168058f62b83e864" exitCode=0 Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.050531 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.050542 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7d829f67-5ff7-4334-bb2d-2767a311159c","Type":"ContainerDied","Data":"89470c1652101ad07a87ab6b23b09d7df3ba2057edc70fbe168058f62b83e864"} Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.050653 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7d829f67-5ff7-4334-bb2d-2767a311159c","Type":"ContainerDied","Data":"4b975f80ef2072e1178f421772e768558fc33ff22a27edb1b1fe54f8108c0f70"} Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.050678 4760 scope.go:117] "RemoveContainer" containerID="89470c1652101ad07a87ab6b23b09d7df3ba2057edc70fbe168058f62b83e864" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.081682 4760 scope.go:117] "RemoveContainer" containerID="cd516e839da52f458c24ab1e3d7dea21bf258da58e9c477318047c2b9eee183f" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.102353 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.117377 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.137466 4760 scope.go:117] "RemoveContainer" containerID="89470c1652101ad07a87ab6b23b09d7df3ba2057edc70fbe168058f62b83e864" Jan 21 16:10:50 crc kubenswrapper[4760]: E0121 16:10:50.138353 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89470c1652101ad07a87ab6b23b09d7df3ba2057edc70fbe168058f62b83e864\": container with ID starting with 89470c1652101ad07a87ab6b23b09d7df3ba2057edc70fbe168058f62b83e864 not found: ID does not exist" containerID="89470c1652101ad07a87ab6b23b09d7df3ba2057edc70fbe168058f62b83e864" Jan 21 16:10:50 crc 
kubenswrapper[4760]: I0121 16:10:50.138388 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89470c1652101ad07a87ab6b23b09d7df3ba2057edc70fbe168058f62b83e864"} err="failed to get container status \"89470c1652101ad07a87ab6b23b09d7df3ba2057edc70fbe168058f62b83e864\": rpc error: code = NotFound desc = could not find container \"89470c1652101ad07a87ab6b23b09d7df3ba2057edc70fbe168058f62b83e864\": container with ID starting with 89470c1652101ad07a87ab6b23b09d7df3ba2057edc70fbe168058f62b83e864 not found: ID does not exist" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.138414 4760 scope.go:117] "RemoveContainer" containerID="cd516e839da52f458c24ab1e3d7dea21bf258da58e9c477318047c2b9eee183f" Jan 21 16:10:50 crc kubenswrapper[4760]: E0121 16:10:50.139187 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd516e839da52f458c24ab1e3d7dea21bf258da58e9c477318047c2b9eee183f\": container with ID starting with cd516e839da52f458c24ab1e3d7dea21bf258da58e9c477318047c2b9eee183f not found: ID does not exist" containerID="cd516e839da52f458c24ab1e3d7dea21bf258da58e9c477318047c2b9eee183f" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.139224 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd516e839da52f458c24ab1e3d7dea21bf258da58e9c477318047c2b9eee183f"} err="failed to get container status \"cd516e839da52f458c24ab1e3d7dea21bf258da58e9c477318047c2b9eee183f\": rpc error: code = NotFound desc = could not find container \"cd516e839da52f458c24ab1e3d7dea21bf258da58e9c477318047c2b9eee183f\": container with ID starting with cd516e839da52f458c24ab1e3d7dea21bf258da58e9c477318047c2b9eee183f not found: ID does not exist" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.142272 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 16:10:50 crc 
kubenswrapper[4760]: E0121 16:10:50.143430 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d829f67-5ff7-4334-bb2d-2767a311159c" containerName="setup-container" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.143480 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d829f67-5ff7-4334-bb2d-2767a311159c" containerName="setup-container" Jan 21 16:10:50 crc kubenswrapper[4760]: E0121 16:10:50.143535 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d829f67-5ff7-4334-bb2d-2767a311159c" containerName="rabbitmq" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.143546 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d829f67-5ff7-4334-bb2d-2767a311159c" containerName="rabbitmq" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.144008 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d829f67-5ff7-4334-bb2d-2767a311159c" containerName="rabbitmq" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.146185 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.149673 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.149926 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.150164 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.150401 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.150545 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.150926 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-289fm" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.151662 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.163866 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.286429 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bf6d5aab-531b-4b6b-94fc-1b386b6b7684-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"bf6d5aab-531b-4b6b-94fc-1b386b6b7684\") " pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.287278 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/bf6d5aab-531b-4b6b-94fc-1b386b6b7684-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"bf6d5aab-531b-4b6b-94fc-1b386b6b7684\") " pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.287433 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bx88t\" (UniqueName: \"kubernetes.io/projected/bf6d5aab-531b-4b6b-94fc-1b386b6b7684-kube-api-access-bx88t\") pod \"rabbitmq-server-0\" (UID: \"bf6d5aab-531b-4b6b-94fc-1b386b6b7684\") " pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.287601 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bf6d5aab-531b-4b6b-94fc-1b386b6b7684-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"bf6d5aab-531b-4b6b-94fc-1b386b6b7684\") " pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.287754 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bf6d5aab-531b-4b6b-94fc-1b386b6b7684-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"bf6d5aab-531b-4b6b-94fc-1b386b6b7684\") " pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.287883 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bf6d5aab-531b-4b6b-94fc-1b386b6b7684-config-data\") pod \"rabbitmq-server-0\" (UID: \"bf6d5aab-531b-4b6b-94fc-1b386b6b7684\") " pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.288016 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod 
\"rabbitmq-server-0\" (UID: \"bf6d5aab-531b-4b6b-94fc-1b386b6b7684\") " pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.288171 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bf6d5aab-531b-4b6b-94fc-1b386b6b7684-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"bf6d5aab-531b-4b6b-94fc-1b386b6b7684\") " pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.288258 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bf6d5aab-531b-4b6b-94fc-1b386b6b7684-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"bf6d5aab-531b-4b6b-94fc-1b386b6b7684\") " pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.288518 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bf6d5aab-531b-4b6b-94fc-1b386b6b7684-server-conf\") pod \"rabbitmq-server-0\" (UID: \"bf6d5aab-531b-4b6b-94fc-1b386b6b7684\") " pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.288619 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bf6d5aab-531b-4b6b-94fc-1b386b6b7684-pod-info\") pod \"rabbitmq-server-0\" (UID: \"bf6d5aab-531b-4b6b-94fc-1b386b6b7684\") " pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.390706 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bf6d5aab-531b-4b6b-94fc-1b386b6b7684-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"bf6d5aab-531b-4b6b-94fc-1b386b6b7684\") " 
pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.390753 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bf6d5aab-531b-4b6b-94fc-1b386b6b7684-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"bf6d5aab-531b-4b6b-94fc-1b386b6b7684\") " pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.390833 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bf6d5aab-531b-4b6b-94fc-1b386b6b7684-server-conf\") pod \"rabbitmq-server-0\" (UID: \"bf6d5aab-531b-4b6b-94fc-1b386b6b7684\") " pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.390852 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bf6d5aab-531b-4b6b-94fc-1b386b6b7684-pod-info\") pod \"rabbitmq-server-0\" (UID: \"bf6d5aab-531b-4b6b-94fc-1b386b6b7684\") " pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.390890 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bf6d5aab-531b-4b6b-94fc-1b386b6b7684-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"bf6d5aab-531b-4b6b-94fc-1b386b6b7684\") " pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.390903 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bf6d5aab-531b-4b6b-94fc-1b386b6b7684-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"bf6d5aab-531b-4b6b-94fc-1b386b6b7684\") " pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.390918 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-bx88t\" (UniqueName: \"kubernetes.io/projected/bf6d5aab-531b-4b6b-94fc-1b386b6b7684-kube-api-access-bx88t\") pod \"rabbitmq-server-0\" (UID: \"bf6d5aab-531b-4b6b-94fc-1b386b6b7684\") " pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.390940 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bf6d5aab-531b-4b6b-94fc-1b386b6b7684-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"bf6d5aab-531b-4b6b-94fc-1b386b6b7684\") " pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.390979 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bf6d5aab-531b-4b6b-94fc-1b386b6b7684-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"bf6d5aab-531b-4b6b-94fc-1b386b6b7684\") " pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.391014 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bf6d5aab-531b-4b6b-94fc-1b386b6b7684-config-data\") pod \"rabbitmq-server-0\" (UID: \"bf6d5aab-531b-4b6b-94fc-1b386b6b7684\") " pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.391043 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"bf6d5aab-531b-4b6b-94fc-1b386b6b7684\") " pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.391515 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: 
\"bf6d5aab-531b-4b6b-94fc-1b386b6b7684\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.392473 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bf6d5aab-531b-4b6b-94fc-1b386b6b7684-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"bf6d5aab-531b-4b6b-94fc-1b386b6b7684\") " pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.392718 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bf6d5aab-531b-4b6b-94fc-1b386b6b7684-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"bf6d5aab-531b-4b6b-94fc-1b386b6b7684\") " pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.393887 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bf6d5aab-531b-4b6b-94fc-1b386b6b7684-server-conf\") pod \"rabbitmq-server-0\" (UID: \"bf6d5aab-531b-4b6b-94fc-1b386b6b7684\") " pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.393903 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bf6d5aab-531b-4b6b-94fc-1b386b6b7684-config-data\") pod \"rabbitmq-server-0\" (UID: \"bf6d5aab-531b-4b6b-94fc-1b386b6b7684\") " pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.396483 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bf6d5aab-531b-4b6b-94fc-1b386b6b7684-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"bf6d5aab-531b-4b6b-94fc-1b386b6b7684\") " pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.399181 4760 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bf6d5aab-531b-4b6b-94fc-1b386b6b7684-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"bf6d5aab-531b-4b6b-94fc-1b386b6b7684\") " pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.407764 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bf6d5aab-531b-4b6b-94fc-1b386b6b7684-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"bf6d5aab-531b-4b6b-94fc-1b386b6b7684\") " pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.407961 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bf6d5aab-531b-4b6b-94fc-1b386b6b7684-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"bf6d5aab-531b-4b6b-94fc-1b386b6b7684\") " pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.413933 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bf6d5aab-531b-4b6b-94fc-1b386b6b7684-pod-info\") pod \"rabbitmq-server-0\" (UID: \"bf6d5aab-531b-4b6b-94fc-1b386b6b7684\") " pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.430020 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bx88t\" (UniqueName: \"kubernetes.io/projected/bf6d5aab-531b-4b6b-94fc-1b386b6b7684-kube-api-access-bx88t\") pod \"rabbitmq-server-0\" (UID: \"bf6d5aab-531b-4b6b-94fc-1b386b6b7684\") " pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.464410 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: 
\"bf6d5aab-531b-4b6b-94fc-1b386b6b7684\") " pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.513950 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.541115 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.699019 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/06b9d67d-1790-43ec-8009-91d0cd43e6da-erlang-cookie-secret\") pod \"06b9d67d-1790-43ec-8009-91d0cd43e6da\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.699183 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/06b9d67d-1790-43ec-8009-91d0cd43e6da-rabbitmq-plugins\") pod \"06b9d67d-1790-43ec-8009-91d0cd43e6da\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.699222 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/06b9d67d-1790-43ec-8009-91d0cd43e6da-config-data\") pod \"06b9d67d-1790-43ec-8009-91d0cd43e6da\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.699356 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"06b9d67d-1790-43ec-8009-91d0cd43e6da\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.699393 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" 
(UniqueName: \"kubernetes.io/projected/06b9d67d-1790-43ec-8009-91d0cd43e6da-rabbitmq-confd\") pod \"06b9d67d-1790-43ec-8009-91d0cd43e6da\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.699468 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/06b9d67d-1790-43ec-8009-91d0cd43e6da-rabbitmq-erlang-cookie\") pod \"06b9d67d-1790-43ec-8009-91d0cd43e6da\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.699596 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/06b9d67d-1790-43ec-8009-91d0cd43e6da-server-conf\") pod \"06b9d67d-1790-43ec-8009-91d0cd43e6da\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.699730 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/06b9d67d-1790-43ec-8009-91d0cd43e6da-plugins-conf\") pod \"06b9d67d-1790-43ec-8009-91d0cd43e6da\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.699830 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/06b9d67d-1790-43ec-8009-91d0cd43e6da-pod-info\") pod \"06b9d67d-1790-43ec-8009-91d0cd43e6da\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.699871 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gr48k\" (UniqueName: \"kubernetes.io/projected/06b9d67d-1790-43ec-8009-91d0cd43e6da-kube-api-access-gr48k\") pod \"06b9d67d-1790-43ec-8009-91d0cd43e6da\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " Jan 21 16:10:50 crc 
kubenswrapper[4760]: I0121 16:10:50.699900 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/06b9d67d-1790-43ec-8009-91d0cd43e6da-rabbitmq-tls\") pod \"06b9d67d-1790-43ec-8009-91d0cd43e6da\" (UID: \"06b9d67d-1790-43ec-8009-91d0cd43e6da\") " Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.701653 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06b9d67d-1790-43ec-8009-91d0cd43e6da-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "06b9d67d-1790-43ec-8009-91d0cd43e6da" (UID: "06b9d67d-1790-43ec-8009-91d0cd43e6da"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.702134 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06b9d67d-1790-43ec-8009-91d0cd43e6da-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "06b9d67d-1790-43ec-8009-91d0cd43e6da" (UID: "06b9d67d-1790-43ec-8009-91d0cd43e6da"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.706400 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06b9d67d-1790-43ec-8009-91d0cd43e6da-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "06b9d67d-1790-43ec-8009-91d0cd43e6da" (UID: "06b9d67d-1790-43ec-8009-91d0cd43e6da"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.706789 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06b9d67d-1790-43ec-8009-91d0cd43e6da-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "06b9d67d-1790-43ec-8009-91d0cd43e6da" (UID: "06b9d67d-1790-43ec-8009-91d0cd43e6da"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.708767 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/06b9d67d-1790-43ec-8009-91d0cd43e6da-pod-info" (OuterVolumeSpecName: "pod-info") pod "06b9d67d-1790-43ec-8009-91d0cd43e6da" (UID: "06b9d67d-1790-43ec-8009-91d0cd43e6da"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.714850 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06b9d67d-1790-43ec-8009-91d0cd43e6da-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "06b9d67d-1790-43ec-8009-91d0cd43e6da" (UID: "06b9d67d-1790-43ec-8009-91d0cd43e6da"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.714884 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06b9d67d-1790-43ec-8009-91d0cd43e6da-kube-api-access-gr48k" (OuterVolumeSpecName: "kube-api-access-gr48k") pod "06b9d67d-1790-43ec-8009-91d0cd43e6da" (UID: "06b9d67d-1790-43ec-8009-91d0cd43e6da"). InnerVolumeSpecName "kube-api-access-gr48k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.717535 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "persistence") pod "06b9d67d-1790-43ec-8009-91d0cd43e6da" (UID: "06b9d67d-1790-43ec-8009-91d0cd43e6da"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.746806 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06b9d67d-1790-43ec-8009-91d0cd43e6da-config-data" (OuterVolumeSpecName: "config-data") pod "06b9d67d-1790-43ec-8009-91d0cd43e6da" (UID: "06b9d67d-1790-43ec-8009-91d0cd43e6da"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.806348 4760 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/06b9d67d-1790-43ec-8009-91d0cd43e6da-pod-info\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.807191 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gr48k\" (UniqueName: \"kubernetes.io/projected/06b9d67d-1790-43ec-8009-91d0cd43e6da-kube-api-access-gr48k\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.807335 4760 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/06b9d67d-1790-43ec-8009-91d0cd43e6da-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.807408 4760 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/06b9d67d-1790-43ec-8009-91d0cd43e6da-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 21 
16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.807482 4760 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/06b9d67d-1790-43ec-8009-91d0cd43e6da-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.807577 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/06b9d67d-1790-43ec-8009-91d0cd43e6da-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.807700 4760 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.807783 4760 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/06b9d67d-1790-43ec-8009-91d0cd43e6da-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.807870 4760 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/06b9d67d-1790-43ec-8009-91d0cd43e6da-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.812877 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06b9d67d-1790-43ec-8009-91d0cd43e6da-server-conf" (OuterVolumeSpecName: "server-conf") pod "06b9d67d-1790-43ec-8009-91d0cd43e6da" (UID: "06b9d67d-1790-43ec-8009-91d0cd43e6da"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.844572 4760 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.892272 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06b9d67d-1790-43ec-8009-91d0cd43e6da-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "06b9d67d-1790-43ec-8009-91d0cd43e6da" (UID: "06b9d67d-1790-43ec-8009-91d0cd43e6da"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.911260 4760 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.911358 4760 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/06b9d67d-1790-43ec-8009-91d0cd43e6da-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.911380 4760 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/06b9d67d-1790-43ec-8009-91d0cd43e6da-server-conf\") on node \"crc\" DevicePath \"\"" Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.950072 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:10:50 crc kubenswrapper[4760]: I0121 16:10:50.950164 4760 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.079299 4760 generic.go:334] "Generic (PLEG): container finished" podID="06b9d67d-1790-43ec-8009-91d0cd43e6da" containerID="6942ed082be8bf1424cd7eaa29502c9f4d5dda5f7ad1ce546eb080ece69798c2" exitCode=0 Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.079379 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"06b9d67d-1790-43ec-8009-91d0cd43e6da","Type":"ContainerDied","Data":"6942ed082be8bf1424cd7eaa29502c9f4d5dda5f7ad1ce546eb080ece69798c2"} Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.079420 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"06b9d67d-1790-43ec-8009-91d0cd43e6da","Type":"ContainerDied","Data":"b14f2e51d8d5e82e725321f229e21a18f7e617652a935f60dfcebde41c79dd68"} Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.079446 4760 scope.go:117] "RemoveContainer" containerID="6942ed082be8bf1424cd7eaa29502c9f4d5dda5f7ad1ce546eb080ece69798c2" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.079636 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.114514 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.138026 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.138252 4760 scope.go:117] "RemoveContainer" containerID="7454443e39ed75706600273f4c7074ca9bd87ff352fb2d4323c9eb6e401331e1" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.158369 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.171642 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 16:10:51 crc kubenswrapper[4760]: E0121 16:10:51.172437 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06b9d67d-1790-43ec-8009-91d0cd43e6da" containerName="rabbitmq" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.172464 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="06b9d67d-1790-43ec-8009-91d0cd43e6da" containerName="rabbitmq" Jan 21 16:10:51 crc kubenswrapper[4760]: E0121 16:10:51.174173 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06b9d67d-1790-43ec-8009-91d0cd43e6da" containerName="setup-container" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.174193 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="06b9d67d-1790-43ec-8009-91d0cd43e6da" containerName="setup-container" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.174623 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="06b9d67d-1790-43ec-8009-91d0cd43e6da" containerName="rabbitmq" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.176575 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.183462 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.183739 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.183944 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.184076 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.184228 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.184839 4760 scope.go:117] "RemoveContainer" containerID="6942ed082be8bf1424cd7eaa29502c9f4d5dda5f7ad1ce546eb080ece69798c2" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.186552 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.186685 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-dh775" Jan 21 16:10:51 crc kubenswrapper[4760]: E0121 16:10:51.187237 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6942ed082be8bf1424cd7eaa29502c9f4d5dda5f7ad1ce546eb080ece69798c2\": container with ID starting with 6942ed082be8bf1424cd7eaa29502c9f4d5dda5f7ad1ce546eb080ece69798c2 not found: ID does not exist" containerID="6942ed082be8bf1424cd7eaa29502c9f4d5dda5f7ad1ce546eb080ece69798c2" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.187284 4760 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6942ed082be8bf1424cd7eaa29502c9f4d5dda5f7ad1ce546eb080ece69798c2"} err="failed to get container status \"6942ed082be8bf1424cd7eaa29502c9f4d5dda5f7ad1ce546eb080ece69798c2\": rpc error: code = NotFound desc = could not find container \"6942ed082be8bf1424cd7eaa29502c9f4d5dda5f7ad1ce546eb080ece69798c2\": container with ID starting with 6942ed082be8bf1424cd7eaa29502c9f4d5dda5f7ad1ce546eb080ece69798c2 not found: ID does not exist" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.188594 4760 scope.go:117] "RemoveContainer" containerID="7454443e39ed75706600273f4c7074ca9bd87ff352fb2d4323c9eb6e401331e1" Jan 21 16:10:51 crc kubenswrapper[4760]: E0121 16:10:51.189900 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7454443e39ed75706600273f4c7074ca9bd87ff352fb2d4323c9eb6e401331e1\": container with ID starting with 7454443e39ed75706600273f4c7074ca9bd87ff352fb2d4323c9eb6e401331e1 not found: ID does not exist" containerID="7454443e39ed75706600273f4c7074ca9bd87ff352fb2d4323c9eb6e401331e1" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.189951 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7454443e39ed75706600273f4c7074ca9bd87ff352fb2d4323c9eb6e401331e1"} err="failed to get container status \"7454443e39ed75706600273f4c7074ca9bd87ff352fb2d4323c9eb6e401331e1\": rpc error: code = NotFound desc = could not find container \"7454443e39ed75706600273f4c7074ca9bd87ff352fb2d4323c9eb6e401331e1\": container with ID starting with 7454443e39ed75706600273f4c7074ca9bd87ff352fb2d4323c9eb6e401331e1 not found: ID does not exist" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.192857 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.319229 4760 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3751c728-a57c-483f-847a-b8765d807937-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3751c728-a57c-483f-847a-b8765d807937\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.319361 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3751c728-a57c-483f-847a-b8765d807937-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"3751c728-a57c-483f-847a-b8765d807937\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.319402 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3751c728-a57c-483f-847a-b8765d807937-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"3751c728-a57c-483f-847a-b8765d807937\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.319442 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hd89j\" (UniqueName: \"kubernetes.io/projected/3751c728-a57c-483f-847a-b8765d807937-kube-api-access-hd89j\") pod \"rabbitmq-cell1-server-0\" (UID: \"3751c728-a57c-483f-847a-b8765d807937\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.319481 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3751c728-a57c-483f-847a-b8765d807937-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"3751c728-a57c-483f-847a-b8765d807937\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 
16:10:51.319531 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3751c728-a57c-483f-847a-b8765d807937-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"3751c728-a57c-483f-847a-b8765d807937\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.319550 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3751c728-a57c-483f-847a-b8765d807937-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"3751c728-a57c-483f-847a-b8765d807937\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.319849 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3751c728-a57c-483f-847a-b8765d807937-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"3751c728-a57c-483f-847a-b8765d807937\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.319870 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3751c728-a57c-483f-847a-b8765d807937-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"3751c728-a57c-483f-847a-b8765d807937\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.319922 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"3751c728-a57c-483f-847a-b8765d807937\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.319944 4760 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3751c728-a57c-483f-847a-b8765d807937-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3751c728-a57c-483f-847a-b8765d807937\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.422211 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"3751c728-a57c-483f-847a-b8765d807937\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.422942 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3751c728-a57c-483f-847a-b8765d807937-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3751c728-a57c-483f-847a-b8765d807937\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.423082 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3751c728-a57c-483f-847a-b8765d807937-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3751c728-a57c-483f-847a-b8765d807937\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.422523 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"3751c728-a57c-483f-847a-b8765d807937\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.423583 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" 
(UniqueName: \"kubernetes.io/secret/3751c728-a57c-483f-847a-b8765d807937-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"3751c728-a57c-483f-847a-b8765d807937\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.423678 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3751c728-a57c-483f-847a-b8765d807937-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"3751c728-a57c-483f-847a-b8765d807937\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.423758 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hd89j\" (UniqueName: \"kubernetes.io/projected/3751c728-a57c-483f-847a-b8765d807937-kube-api-access-hd89j\") pod \"rabbitmq-cell1-server-0\" (UID: \"3751c728-a57c-483f-847a-b8765d807937\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.423872 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3751c728-a57c-483f-847a-b8765d807937-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"3751c728-a57c-483f-847a-b8765d807937\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.424016 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3751c728-a57c-483f-847a-b8765d807937-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"3751c728-a57c-483f-847a-b8765d807937\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.424051 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/3751c728-a57c-483f-847a-b8765d807937-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"3751c728-a57c-483f-847a-b8765d807937\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.424087 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3751c728-a57c-483f-847a-b8765d807937-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"3751c728-a57c-483f-847a-b8765d807937\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.424112 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3751c728-a57c-483f-847a-b8765d807937-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"3751c728-a57c-483f-847a-b8765d807937\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.424249 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3751c728-a57c-483f-847a-b8765d807937-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3751c728-a57c-483f-847a-b8765d807937\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.424640 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3751c728-a57c-483f-847a-b8765d807937-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"3751c728-a57c-483f-847a-b8765d807937\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.424646 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3751c728-a57c-483f-847a-b8765d807937-rabbitmq-erlang-cookie\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"3751c728-a57c-483f-847a-b8765d807937\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.430490 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3751c728-a57c-483f-847a-b8765d807937-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"3751c728-a57c-483f-847a-b8765d807937\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.434889 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3751c728-a57c-483f-847a-b8765d807937-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"3751c728-a57c-483f-847a-b8765d807937\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.435085 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3751c728-a57c-483f-847a-b8765d807937-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3751c728-a57c-483f-847a-b8765d807937\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.441017 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3751c728-a57c-483f-847a-b8765d807937-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"3751c728-a57c-483f-847a-b8765d807937\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.444282 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3751c728-a57c-483f-847a-b8765d807937-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"3751c728-a57c-483f-847a-b8765d807937\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc 
kubenswrapper[4760]: I0121 16:10:51.457551 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hd89j\" (UniqueName: \"kubernetes.io/projected/3751c728-a57c-483f-847a-b8765d807937-kube-api-access-hd89j\") pod \"rabbitmq-cell1-server-0\" (UID: \"3751c728-a57c-483f-847a-b8765d807937\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.463683 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3751c728-a57c-483f-847a-b8765d807937-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"3751c728-a57c-483f-847a-b8765d807937\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.516884 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"3751c728-a57c-483f-847a-b8765d807937\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.634153 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06b9d67d-1790-43ec-8009-91d0cd43e6da" path="/var/lib/kubelet/pods/06b9d67d-1790-43ec-8009-91d0cd43e6da/volumes" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.635119 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d829f67-5ff7-4334-bb2d-2767a311159c" path="/var/lib/kubelet/pods/7d829f67-5ff7-4334-bb2d-2767a311159c/volumes" Jan 21 16:10:51 crc kubenswrapper[4760]: I0121 16:10:51.817186 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 16:10:52 crc kubenswrapper[4760]: I0121 16:10:52.094716 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bf6d5aab-531b-4b6b-94fc-1b386b6b7684","Type":"ContainerStarted","Data":"02114bbeb689b607d9eb06f52d235452aec616146fabecb6b842083362c3fe0a"} Jan 21 16:10:53 crc kubenswrapper[4760]: I0121 16:10:53.015750 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 16:10:53 crc kubenswrapper[4760]: I0121 16:10:53.032839 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-d558885bc-vvctk"] Jan 21 16:10:53 crc kubenswrapper[4760]: I0121 16:10:53.034884 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-vvctk" Jan 21 16:10:53 crc kubenswrapper[4760]: I0121 16:10:53.038496 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 21 16:10:53 crc kubenswrapper[4760]: I0121 16:10:53.043641 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-vvctk"] Jan 21 16:10:53 crc kubenswrapper[4760]: I0121 16:10:53.115429 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bf6d5aab-531b-4b6b-94fc-1b386b6b7684","Type":"ContainerStarted","Data":"55638f7fd284ea107dc53866ab65bdd498dffd06c0487c1aaca0f6a62ae66b47"} Jan 21 16:10:53 crc kubenswrapper[4760]: I0121 16:10:53.120750 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3751c728-a57c-483f-847a-b8765d807937","Type":"ContainerStarted","Data":"04504ba90a9d7fe1b697625160bf9a36566df23176e6f8851eddf6d830acfef9"} Jan 21 16:10:53 crc kubenswrapper[4760]: I0121 16:10:53.163876 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-vvctk\" (UID: \"d9eacc7b-4ed9-4c85-b348-13155546eae1\") " pod="openstack/dnsmasq-dns-d558885bc-vvctk" Jan 21 16:10:53 crc kubenswrapper[4760]: I0121 16:10:53.163969 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-vvctk\" (UID: \"d9eacc7b-4ed9-4c85-b348-13155546eae1\") " pod="openstack/dnsmasq-dns-d558885bc-vvctk" Jan 21 16:10:53 crc kubenswrapper[4760]: I0121 16:10:53.163993 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbqfr\" (UniqueName: \"kubernetes.io/projected/d9eacc7b-4ed9-4c85-b348-13155546eae1-kube-api-access-zbqfr\") pod \"dnsmasq-dns-d558885bc-vvctk\" (UID: \"d9eacc7b-4ed9-4c85-b348-13155546eae1\") " pod="openstack/dnsmasq-dns-d558885bc-vvctk" Jan 21 16:10:53 crc kubenswrapper[4760]: I0121 16:10:53.164013 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-config\") pod \"dnsmasq-dns-d558885bc-vvctk\" (UID: \"d9eacc7b-4ed9-4c85-b348-13155546eae1\") " pod="openstack/dnsmasq-dns-d558885bc-vvctk" Jan 21 16:10:53 crc kubenswrapper[4760]: I0121 16:10:53.164057 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-vvctk\" (UID: \"d9eacc7b-4ed9-4c85-b348-13155546eae1\") " pod="openstack/dnsmasq-dns-d558885bc-vvctk" Jan 21 16:10:53 crc kubenswrapper[4760]: I0121 16:10:53.164507 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-dns-svc\") pod \"dnsmasq-dns-d558885bc-vvctk\" (UID: \"d9eacc7b-4ed9-4c85-b348-13155546eae1\") " pod="openstack/dnsmasq-dns-d558885bc-vvctk" Jan 21 16:10:53 crc kubenswrapper[4760]: I0121 16:10:53.164694 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-vvctk\" (UID: \"d9eacc7b-4ed9-4c85-b348-13155546eae1\") " pod="openstack/dnsmasq-dns-d558885bc-vvctk" Jan 21 16:10:53 crc kubenswrapper[4760]: I0121 16:10:53.266889 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-vvctk\" (UID: \"d9eacc7b-4ed9-4c85-b348-13155546eae1\") " pod="openstack/dnsmasq-dns-d558885bc-vvctk" Jan 21 16:10:53 crc kubenswrapper[4760]: I0121 16:10:53.267381 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-config\") pod \"dnsmasq-dns-d558885bc-vvctk\" (UID: \"d9eacc7b-4ed9-4c85-b348-13155546eae1\") " pod="openstack/dnsmasq-dns-d558885bc-vvctk" Jan 21 16:10:53 crc kubenswrapper[4760]: I0121 16:10:53.267416 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbqfr\" (UniqueName: \"kubernetes.io/projected/d9eacc7b-4ed9-4c85-b348-13155546eae1-kube-api-access-zbqfr\") pod \"dnsmasq-dns-d558885bc-vvctk\" (UID: \"d9eacc7b-4ed9-4c85-b348-13155546eae1\") " pod="openstack/dnsmasq-dns-d558885bc-vvctk" Jan 21 16:10:53 crc kubenswrapper[4760]: I0121 16:10:53.267475 4760 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-vvctk\" (UID: \"d9eacc7b-4ed9-4c85-b348-13155546eae1\") " pod="openstack/dnsmasq-dns-d558885bc-vvctk" Jan 21 16:10:53 crc kubenswrapper[4760]: I0121 16:10:53.267576 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-dns-svc\") pod \"dnsmasq-dns-d558885bc-vvctk\" (UID: \"d9eacc7b-4ed9-4c85-b348-13155546eae1\") " pod="openstack/dnsmasq-dns-d558885bc-vvctk" Jan 21 16:10:53 crc kubenswrapper[4760]: I0121 16:10:53.267644 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-vvctk\" (UID: \"d9eacc7b-4ed9-4c85-b348-13155546eae1\") " pod="openstack/dnsmasq-dns-d558885bc-vvctk" Jan 21 16:10:53 crc kubenswrapper[4760]: I0121 16:10:53.267726 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-vvctk\" (UID: \"d9eacc7b-4ed9-4c85-b348-13155546eae1\") " pod="openstack/dnsmasq-dns-d558885bc-vvctk" Jan 21 16:10:53 crc kubenswrapper[4760]: I0121 16:10:53.268119 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-vvctk\" (UID: \"d9eacc7b-4ed9-4c85-b348-13155546eae1\") " pod="openstack/dnsmasq-dns-d558885bc-vvctk" Jan 21 16:10:53 crc kubenswrapper[4760]: I0121 16:10:53.268377 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-vvctk\" (UID: \"d9eacc7b-4ed9-4c85-b348-13155546eae1\") " pod="openstack/dnsmasq-dns-d558885bc-vvctk" Jan 21 16:10:53 crc kubenswrapper[4760]: I0121 16:10:53.269029 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-vvctk\" (UID: \"d9eacc7b-4ed9-4c85-b348-13155546eae1\") " pod="openstack/dnsmasq-dns-d558885bc-vvctk" Jan 21 16:10:53 crc kubenswrapper[4760]: I0121 16:10:53.269311 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-dns-svc\") pod \"dnsmasq-dns-d558885bc-vvctk\" (UID: \"d9eacc7b-4ed9-4c85-b348-13155546eae1\") " pod="openstack/dnsmasq-dns-d558885bc-vvctk" Jan 21 16:10:53 crc kubenswrapper[4760]: I0121 16:10:53.269342 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-vvctk\" (UID: \"d9eacc7b-4ed9-4c85-b348-13155546eae1\") " pod="openstack/dnsmasq-dns-d558885bc-vvctk" Jan 21 16:10:53 crc kubenswrapper[4760]: I0121 16:10:53.269957 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-config\") pod \"dnsmasq-dns-d558885bc-vvctk\" (UID: \"d9eacc7b-4ed9-4c85-b348-13155546eae1\") " pod="openstack/dnsmasq-dns-d558885bc-vvctk" Jan 21 16:10:53 crc kubenswrapper[4760]: I0121 16:10:53.286267 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbqfr\" (UniqueName: \"kubernetes.io/projected/d9eacc7b-4ed9-4c85-b348-13155546eae1-kube-api-access-zbqfr\") pod 
\"dnsmasq-dns-d558885bc-vvctk\" (UID: \"d9eacc7b-4ed9-4c85-b348-13155546eae1\") " pod="openstack/dnsmasq-dns-d558885bc-vvctk" Jan 21 16:10:53 crc kubenswrapper[4760]: I0121 16:10:53.424952 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-vvctk" Jan 21 16:10:53 crc kubenswrapper[4760]: I0121 16:10:53.889150 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-vvctk"] Jan 21 16:10:54 crc kubenswrapper[4760]: I0121 16:10:54.130177 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-vvctk" event={"ID":"d9eacc7b-4ed9-4c85-b348-13155546eae1","Type":"ContainerStarted","Data":"1fe40e2a043304dcc7f01a685260f24c84be0f3b76e03ce38b2aba6e1f614c1b"} Jan 21 16:10:54 crc kubenswrapper[4760]: I0121 16:10:54.476280 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="7d829f67-5ff7-4334-bb2d-2767a311159c" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: i/o timeout" Jan 21 16:10:55 crc kubenswrapper[4760]: I0121 16:10:55.152900 4760 generic.go:334] "Generic (PLEG): container finished" podID="d9eacc7b-4ed9-4c85-b348-13155546eae1" containerID="5575a91eb5b49e5c35ee96f8a367ebb1977c62123668b247c0f8e1c49dee3f2b" exitCode=0 Jan 21 16:10:55 crc kubenswrapper[4760]: I0121 16:10:55.153087 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-vvctk" event={"ID":"d9eacc7b-4ed9-4c85-b348-13155546eae1","Type":"ContainerDied","Data":"5575a91eb5b49e5c35ee96f8a367ebb1977c62123668b247c0f8e1c49dee3f2b"} Jan 21 16:10:55 crc kubenswrapper[4760]: I0121 16:10:55.156974 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3751c728-a57c-483f-847a-b8765d807937","Type":"ContainerStarted","Data":"5fb2534561918f38a27b5a9ca1c8c859b945b0af9e537cb67367f307ab072b13"} Jan 21 16:10:56 crc kubenswrapper[4760]: 
I0121 16:10:56.234498 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-vvctk" event={"ID":"d9eacc7b-4ed9-4c85-b348-13155546eae1","Type":"ContainerStarted","Data":"4559bd863c88826aa4b83b9e4b6616d155e234adc22ca4eea35b64fa9ecb1160"} Jan 21 16:10:56 crc kubenswrapper[4760]: I0121 16:10:56.235703 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-d558885bc-vvctk" Jan 21 16:11:03 crc kubenswrapper[4760]: I0121 16:11:03.427670 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-d558885bc-vvctk" Jan 21 16:11:03 crc kubenswrapper[4760]: I0121 16:11:03.458164 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-d558885bc-vvctk" podStartSLOduration=11.458131417 podStartE2EDuration="11.458131417s" podCreationTimestamp="2026-01-21 16:10:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:10:56.259536935 +0000 UTC m=+1426.927306533" watchObservedRunningTime="2026-01-21 16:11:03.458131417 +0000 UTC m=+1434.125900985" Jan 21 16:11:03 crc kubenswrapper[4760]: I0121 16:11:03.489988 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-jzlg2"] Jan 21 16:11:03 crc kubenswrapper[4760]: I0121 16:11:03.490445 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cd5cbd7b9-jzlg2" podUID="bddc2f23-658d-41d3-a844-389116907417" containerName="dnsmasq-dns" containerID="cri-o://89c4280d962cbb3e8f6b25bcb5a9e0cb9f66621707491e50a973c694fa73ac24" gracePeriod=10 Jan 21 16:11:03 crc kubenswrapper[4760]: I0121 16:11:03.762045 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78c64bc9c5-9nlpp"] Jan 21 16:11:03 crc kubenswrapper[4760]: I0121 16:11:03.764866 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78c64bc9c5-9nlpp" Jan 21 16:11:03 crc kubenswrapper[4760]: I0121 16:11:03.804593 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78c64bc9c5-9nlpp"] Jan 21 16:11:03 crc kubenswrapper[4760]: I0121 16:11:03.812127 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2be85016-adb8-42d1-8b8b-90d92e06edec-dns-svc\") pod \"dnsmasq-dns-78c64bc9c5-9nlpp\" (UID: \"2be85016-adb8-42d1-8b8b-90d92e06edec\") " pod="openstack/dnsmasq-dns-78c64bc9c5-9nlpp" Jan 21 16:11:03 crc kubenswrapper[4760]: I0121 16:11:03.812445 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcz8x\" (UniqueName: \"kubernetes.io/projected/2be85016-adb8-42d1-8b8b-90d92e06edec-kube-api-access-fcz8x\") pod \"dnsmasq-dns-78c64bc9c5-9nlpp\" (UID: \"2be85016-adb8-42d1-8b8b-90d92e06edec\") " pod="openstack/dnsmasq-dns-78c64bc9c5-9nlpp" Jan 21 16:11:03 crc kubenswrapper[4760]: I0121 16:11:03.812630 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2be85016-adb8-42d1-8b8b-90d92e06edec-ovsdbserver-nb\") pod \"dnsmasq-dns-78c64bc9c5-9nlpp\" (UID: \"2be85016-adb8-42d1-8b8b-90d92e06edec\") " pod="openstack/dnsmasq-dns-78c64bc9c5-9nlpp" Jan 21 16:11:03 crc kubenswrapper[4760]: I0121 16:11:03.812747 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2be85016-adb8-42d1-8b8b-90d92e06edec-ovsdbserver-sb\") pod \"dnsmasq-dns-78c64bc9c5-9nlpp\" (UID: \"2be85016-adb8-42d1-8b8b-90d92e06edec\") " pod="openstack/dnsmasq-dns-78c64bc9c5-9nlpp" Jan 21 16:11:03 crc kubenswrapper[4760]: I0121 16:11:03.812842 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2be85016-adb8-42d1-8b8b-90d92e06edec-dns-swift-storage-0\") pod \"dnsmasq-dns-78c64bc9c5-9nlpp\" (UID: \"2be85016-adb8-42d1-8b8b-90d92e06edec\") " pod="openstack/dnsmasq-dns-78c64bc9c5-9nlpp" Jan 21 16:11:03 crc kubenswrapper[4760]: I0121 16:11:03.812939 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2be85016-adb8-42d1-8b8b-90d92e06edec-openstack-edpm-ipam\") pod \"dnsmasq-dns-78c64bc9c5-9nlpp\" (UID: \"2be85016-adb8-42d1-8b8b-90d92e06edec\") " pod="openstack/dnsmasq-dns-78c64bc9c5-9nlpp" Jan 21 16:11:03 crc kubenswrapper[4760]: I0121 16:11:03.813086 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2be85016-adb8-42d1-8b8b-90d92e06edec-config\") pod \"dnsmasq-dns-78c64bc9c5-9nlpp\" (UID: \"2be85016-adb8-42d1-8b8b-90d92e06edec\") " pod="openstack/dnsmasq-dns-78c64bc9c5-9nlpp" Jan 21 16:11:03 crc kubenswrapper[4760]: I0121 16:11:03.915467 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2be85016-adb8-42d1-8b8b-90d92e06edec-ovsdbserver-nb\") pod \"dnsmasq-dns-78c64bc9c5-9nlpp\" (UID: \"2be85016-adb8-42d1-8b8b-90d92e06edec\") " pod="openstack/dnsmasq-dns-78c64bc9c5-9nlpp" Jan 21 16:11:03 crc kubenswrapper[4760]: I0121 16:11:03.915537 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2be85016-adb8-42d1-8b8b-90d92e06edec-ovsdbserver-sb\") pod \"dnsmasq-dns-78c64bc9c5-9nlpp\" (UID: \"2be85016-adb8-42d1-8b8b-90d92e06edec\") " pod="openstack/dnsmasq-dns-78c64bc9c5-9nlpp" Jan 21 16:11:03 crc kubenswrapper[4760]: I0121 16:11:03.915598 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2be85016-adb8-42d1-8b8b-90d92e06edec-dns-swift-storage-0\") pod \"dnsmasq-dns-78c64bc9c5-9nlpp\" (UID: \"2be85016-adb8-42d1-8b8b-90d92e06edec\") " pod="openstack/dnsmasq-dns-78c64bc9c5-9nlpp" Jan 21 16:11:03 crc kubenswrapper[4760]: I0121 16:11:03.915618 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2be85016-adb8-42d1-8b8b-90d92e06edec-openstack-edpm-ipam\") pod \"dnsmasq-dns-78c64bc9c5-9nlpp\" (UID: \"2be85016-adb8-42d1-8b8b-90d92e06edec\") " pod="openstack/dnsmasq-dns-78c64bc9c5-9nlpp" Jan 21 16:11:03 crc kubenswrapper[4760]: I0121 16:11:03.915704 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2be85016-adb8-42d1-8b8b-90d92e06edec-config\") pod \"dnsmasq-dns-78c64bc9c5-9nlpp\" (UID: \"2be85016-adb8-42d1-8b8b-90d92e06edec\") " pod="openstack/dnsmasq-dns-78c64bc9c5-9nlpp" Jan 21 16:11:03 crc kubenswrapper[4760]: I0121 16:11:03.915755 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2be85016-adb8-42d1-8b8b-90d92e06edec-dns-svc\") pod \"dnsmasq-dns-78c64bc9c5-9nlpp\" (UID: \"2be85016-adb8-42d1-8b8b-90d92e06edec\") " pod="openstack/dnsmasq-dns-78c64bc9c5-9nlpp" Jan 21 16:11:03 crc kubenswrapper[4760]: I0121 16:11:03.915779 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcz8x\" (UniqueName: \"kubernetes.io/projected/2be85016-adb8-42d1-8b8b-90d92e06edec-kube-api-access-fcz8x\") pod \"dnsmasq-dns-78c64bc9c5-9nlpp\" (UID: \"2be85016-adb8-42d1-8b8b-90d92e06edec\") " pod="openstack/dnsmasq-dns-78c64bc9c5-9nlpp" Jan 21 16:11:03 crc kubenswrapper[4760]: I0121 16:11:03.917482 4760 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2be85016-adb8-42d1-8b8b-90d92e06edec-dns-swift-storage-0\") pod \"dnsmasq-dns-78c64bc9c5-9nlpp\" (UID: \"2be85016-adb8-42d1-8b8b-90d92e06edec\") " pod="openstack/dnsmasq-dns-78c64bc9c5-9nlpp" Jan 21 16:11:03 crc kubenswrapper[4760]: I0121 16:11:03.917515 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2be85016-adb8-42d1-8b8b-90d92e06edec-config\") pod \"dnsmasq-dns-78c64bc9c5-9nlpp\" (UID: \"2be85016-adb8-42d1-8b8b-90d92e06edec\") " pod="openstack/dnsmasq-dns-78c64bc9c5-9nlpp" Jan 21 16:11:03 crc kubenswrapper[4760]: I0121 16:11:03.917496 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2be85016-adb8-42d1-8b8b-90d92e06edec-dns-svc\") pod \"dnsmasq-dns-78c64bc9c5-9nlpp\" (UID: \"2be85016-adb8-42d1-8b8b-90d92e06edec\") " pod="openstack/dnsmasq-dns-78c64bc9c5-9nlpp" Jan 21 16:11:03 crc kubenswrapper[4760]: I0121 16:11:03.918243 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2be85016-adb8-42d1-8b8b-90d92e06edec-ovsdbserver-nb\") pod \"dnsmasq-dns-78c64bc9c5-9nlpp\" (UID: \"2be85016-adb8-42d1-8b8b-90d92e06edec\") " pod="openstack/dnsmasq-dns-78c64bc9c5-9nlpp" Jan 21 16:11:03 crc kubenswrapper[4760]: I0121 16:11:03.918855 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2be85016-adb8-42d1-8b8b-90d92e06edec-ovsdbserver-sb\") pod \"dnsmasq-dns-78c64bc9c5-9nlpp\" (UID: \"2be85016-adb8-42d1-8b8b-90d92e06edec\") " pod="openstack/dnsmasq-dns-78c64bc9c5-9nlpp" Jan 21 16:11:03 crc kubenswrapper[4760]: I0121 16:11:03.920300 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/configmap/2be85016-adb8-42d1-8b8b-90d92e06edec-openstack-edpm-ipam\") pod \"dnsmasq-dns-78c64bc9c5-9nlpp\" (UID: \"2be85016-adb8-42d1-8b8b-90d92e06edec\") " pod="openstack/dnsmasq-dns-78c64bc9c5-9nlpp" Jan 21 16:11:03 crc kubenswrapper[4760]: I0121 16:11:03.949860 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcz8x\" (UniqueName: \"kubernetes.io/projected/2be85016-adb8-42d1-8b8b-90d92e06edec-kube-api-access-fcz8x\") pod \"dnsmasq-dns-78c64bc9c5-9nlpp\" (UID: \"2be85016-adb8-42d1-8b8b-90d92e06edec\") " pod="openstack/dnsmasq-dns-78c64bc9c5-9nlpp" Jan 21 16:11:04 crc kubenswrapper[4760]: I0121 16:11:04.010545 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-jzlg2" Jan 21 16:11:04 crc kubenswrapper[4760]: I0121 16:11:04.016981 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bddc2f23-658d-41d3-a844-389116907417-dns-svc\") pod \"bddc2f23-658d-41d3-a844-389116907417\" (UID: \"bddc2f23-658d-41d3-a844-389116907417\") " Jan 21 16:11:04 crc kubenswrapper[4760]: I0121 16:11:04.017051 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bddc2f23-658d-41d3-a844-389116907417-config\") pod \"bddc2f23-658d-41d3-a844-389116907417\" (UID: \"bddc2f23-658d-41d3-a844-389116907417\") " Jan 21 16:11:04 crc kubenswrapper[4760]: I0121 16:11:04.017079 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bddc2f23-658d-41d3-a844-389116907417-dns-swift-storage-0\") pod \"bddc2f23-658d-41d3-a844-389116907417\" (UID: \"bddc2f23-658d-41d3-a844-389116907417\") " Jan 21 16:11:04 crc kubenswrapper[4760]: I0121 16:11:04.017139 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bddc2f23-658d-41d3-a844-389116907417-ovsdbserver-sb\") pod \"bddc2f23-658d-41d3-a844-389116907417\" (UID: \"bddc2f23-658d-41d3-a844-389116907417\") " Jan 21 16:11:04 crc kubenswrapper[4760]: I0121 16:11:04.017163 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5mp2\" (UniqueName: \"kubernetes.io/projected/bddc2f23-658d-41d3-a844-389116907417-kube-api-access-z5mp2\") pod \"bddc2f23-658d-41d3-a844-389116907417\" (UID: \"bddc2f23-658d-41d3-a844-389116907417\") " Jan 21 16:11:04 crc kubenswrapper[4760]: I0121 16:11:04.017194 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bddc2f23-658d-41d3-a844-389116907417-ovsdbserver-nb\") pod \"bddc2f23-658d-41d3-a844-389116907417\" (UID: \"bddc2f23-658d-41d3-a844-389116907417\") " Jan 21 16:11:04 crc kubenswrapper[4760]: I0121 16:11:04.026657 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bddc2f23-658d-41d3-a844-389116907417-kube-api-access-z5mp2" (OuterVolumeSpecName: "kube-api-access-z5mp2") pod "bddc2f23-658d-41d3-a844-389116907417" (UID: "bddc2f23-658d-41d3-a844-389116907417"). InnerVolumeSpecName "kube-api-access-z5mp2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:11:04 crc kubenswrapper[4760]: I0121 16:11:04.102078 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78c64bc9c5-9nlpp" Jan 21 16:11:04 crc kubenswrapper[4760]: I0121 16:11:04.107993 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bddc2f23-658d-41d3-a844-389116907417-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bddc2f23-658d-41d3-a844-389116907417" (UID: "bddc2f23-658d-41d3-a844-389116907417"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:11:04 crc kubenswrapper[4760]: I0121 16:11:04.111771 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bddc2f23-658d-41d3-a844-389116907417-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bddc2f23-658d-41d3-a844-389116907417" (UID: "bddc2f23-658d-41d3-a844-389116907417"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:11:04 crc kubenswrapper[4760]: I0121 16:11:04.121050 4760 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bddc2f23-658d-41d3-a844-389116907417-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:04 crc kubenswrapper[4760]: I0121 16:11:04.121117 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5mp2\" (UniqueName: \"kubernetes.io/projected/bddc2f23-658d-41d3-a844-389116907417-kube-api-access-z5mp2\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:04 crc kubenswrapper[4760]: I0121 16:11:04.121130 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bddc2f23-658d-41d3-a844-389116907417-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:04 crc kubenswrapper[4760]: I0121 16:11:04.125202 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bddc2f23-658d-41d3-a844-389116907417-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "bddc2f23-658d-41d3-a844-389116907417" (UID: "bddc2f23-658d-41d3-a844-389116907417"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:11:04 crc kubenswrapper[4760]: I0121 16:11:04.126559 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bddc2f23-658d-41d3-a844-389116907417-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bddc2f23-658d-41d3-a844-389116907417" (UID: "bddc2f23-658d-41d3-a844-389116907417"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:11:04 crc kubenswrapper[4760]: I0121 16:11:04.134924 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bddc2f23-658d-41d3-a844-389116907417-config" (OuterVolumeSpecName: "config") pod "bddc2f23-658d-41d3-a844-389116907417" (UID: "bddc2f23-658d-41d3-a844-389116907417"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:11:04 crc kubenswrapper[4760]: I0121 16:11:04.226351 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bddc2f23-658d-41d3-a844-389116907417-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:04 crc kubenswrapper[4760]: I0121 16:11:04.226664 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bddc2f23-658d-41d3-a844-389116907417-config\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:04 crc kubenswrapper[4760]: I0121 16:11:04.226675 4760 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bddc2f23-658d-41d3-a844-389116907417-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:04 crc kubenswrapper[4760]: I0121 16:11:04.413812 4760 generic.go:334] "Generic (PLEG): container finished" podID="bddc2f23-658d-41d3-a844-389116907417" containerID="89c4280d962cbb3e8f6b25bcb5a9e0cb9f66621707491e50a973c694fa73ac24" exitCode=0 Jan 21 16:11:04 crc 
kubenswrapper[4760]: I0121 16:11:04.413885 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-jzlg2"
Jan 21 16:11:04 crc kubenswrapper[4760]: I0121 16:11:04.413888 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-jzlg2" event={"ID":"bddc2f23-658d-41d3-a844-389116907417","Type":"ContainerDied","Data":"89c4280d962cbb3e8f6b25bcb5a9e0cb9f66621707491e50a973c694fa73ac24"}
Jan 21 16:11:04 crc kubenswrapper[4760]: I0121 16:11:04.413940 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-jzlg2" event={"ID":"bddc2f23-658d-41d3-a844-389116907417","Type":"ContainerDied","Data":"2620afc8d15e0a129415e8974b7b98a2159727dd5709fa0bc1367c3ee032a9c6"}
Jan 21 16:11:04 crc kubenswrapper[4760]: I0121 16:11:04.413966 4760 scope.go:117] "RemoveContainer" containerID="89c4280d962cbb3e8f6b25bcb5a9e0cb9f66621707491e50a973c694fa73ac24"
Jan 21 16:11:04 crc kubenswrapper[4760]: I0121 16:11:04.446026 4760 scope.go:117] "RemoveContainer" containerID="b4b2646a353e81ac3b5e3c15b4bf608625b2f01c7a18ff2ce0c4a31553df187f"
Jan 21 16:11:04 crc kubenswrapper[4760]: I0121 16:11:04.457957 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-jzlg2"]
Jan 21 16:11:04 crc kubenswrapper[4760]: I0121 16:11:04.467811 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-jzlg2"]
Jan 21 16:11:04 crc kubenswrapper[4760]: I0121 16:11:04.470776 4760 scope.go:117] "RemoveContainer" containerID="89c4280d962cbb3e8f6b25bcb5a9e0cb9f66621707491e50a973c694fa73ac24"
Jan 21 16:11:04 crc kubenswrapper[4760]: E0121 16:11:04.471591 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89c4280d962cbb3e8f6b25bcb5a9e0cb9f66621707491e50a973c694fa73ac24\": container with ID starting with 89c4280d962cbb3e8f6b25bcb5a9e0cb9f66621707491e50a973c694fa73ac24 not found: ID does not exist" containerID="89c4280d962cbb3e8f6b25bcb5a9e0cb9f66621707491e50a973c694fa73ac24"
Jan 21 16:11:04 crc kubenswrapper[4760]: I0121 16:11:04.471674 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89c4280d962cbb3e8f6b25bcb5a9e0cb9f66621707491e50a973c694fa73ac24"} err="failed to get container status \"89c4280d962cbb3e8f6b25bcb5a9e0cb9f66621707491e50a973c694fa73ac24\": rpc error: code = NotFound desc = could not find container \"89c4280d962cbb3e8f6b25bcb5a9e0cb9f66621707491e50a973c694fa73ac24\": container with ID starting with 89c4280d962cbb3e8f6b25bcb5a9e0cb9f66621707491e50a973c694fa73ac24 not found: ID does not exist"
Jan 21 16:11:04 crc kubenswrapper[4760]: I0121 16:11:04.471725 4760 scope.go:117] "RemoveContainer" containerID="b4b2646a353e81ac3b5e3c15b4bf608625b2f01c7a18ff2ce0c4a31553df187f"
Jan 21 16:11:04 crc kubenswrapper[4760]: E0121 16:11:04.472182 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4b2646a353e81ac3b5e3c15b4bf608625b2f01c7a18ff2ce0c4a31553df187f\": container with ID starting with b4b2646a353e81ac3b5e3c15b4bf608625b2f01c7a18ff2ce0c4a31553df187f not found: ID does not exist" containerID="b4b2646a353e81ac3b5e3c15b4bf608625b2f01c7a18ff2ce0c4a31553df187f"
Jan 21 16:11:04 crc kubenswrapper[4760]: I0121 16:11:04.472259 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4b2646a353e81ac3b5e3c15b4bf608625b2f01c7a18ff2ce0c4a31553df187f"} err="failed to get container status \"b4b2646a353e81ac3b5e3c15b4bf608625b2f01c7a18ff2ce0c4a31553df187f\": rpc error: code = NotFound desc = could not find container \"b4b2646a353e81ac3b5e3c15b4bf608625b2f01c7a18ff2ce0c4a31553df187f\": container with ID starting with b4b2646a353e81ac3b5e3c15b4bf608625b2f01c7a18ff2ce0c4a31553df187f not found: ID does not exist"
Jan 21 16:11:04 crc kubenswrapper[4760]: I0121 16:11:04.590350 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78c64bc9c5-9nlpp"]
Jan 21 16:11:05 crc kubenswrapper[4760]: I0121 16:11:05.429468 4760 generic.go:334] "Generic (PLEG): container finished" podID="2be85016-adb8-42d1-8b8b-90d92e06edec" containerID="8f28edbf304d509772ca0e28c8423643b84e212b941e12cdf6e2532500f82fdc" exitCode=0
Jan 21 16:11:05 crc kubenswrapper[4760]: I0121 16:11:05.429553 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c64bc9c5-9nlpp" event={"ID":"2be85016-adb8-42d1-8b8b-90d92e06edec","Type":"ContainerDied","Data":"8f28edbf304d509772ca0e28c8423643b84e212b941e12cdf6e2532500f82fdc"}
Jan 21 16:11:05 crc kubenswrapper[4760]: I0121 16:11:05.430032 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c64bc9c5-9nlpp" event={"ID":"2be85016-adb8-42d1-8b8b-90d92e06edec","Type":"ContainerStarted","Data":"77d411561e1a616a62012485bf09d1b2d408a271a1b7aeb53cc87de1a6fc05c7"}
Jan 21 16:11:05 crc kubenswrapper[4760]: I0121 16:11:05.637508 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bddc2f23-658d-41d3-a844-389116907417" path="/var/lib/kubelet/pods/bddc2f23-658d-41d3-a844-389116907417/volumes"
Jan 21 16:11:06 crc kubenswrapper[4760]: I0121 16:11:06.442024 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c64bc9c5-9nlpp" event={"ID":"2be85016-adb8-42d1-8b8b-90d92e06edec","Type":"ContainerStarted","Data":"16da075b0a06afc85f3511d42e71ce5c7fbb7a18ca497ba3c07f8d302cb948ea"}
Jan 21 16:11:06 crc kubenswrapper[4760]: I0121 16:11:06.442613 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-78c64bc9c5-9nlpp"
Jan 21 16:11:06 crc kubenswrapper[4760]: I0121 16:11:06.463953 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-78c64bc9c5-9nlpp" podStartSLOduration=3.463932103 podStartE2EDuration="3.463932103s" podCreationTimestamp="2026-01-21 16:11:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:11:06.463793999 +0000 UTC m=+1437.131563587" watchObservedRunningTime="2026-01-21 16:11:06.463932103 +0000 UTC m=+1437.131701691"
Jan 21 16:11:08 crc kubenswrapper[4760]: I0121 16:11:08.762826 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-cd5cbd7b9-jzlg2" podUID="bddc2f23-658d-41d3-a844-389116907417" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.202:5353: i/o timeout"
Jan 21 16:11:14 crc kubenswrapper[4760]: I0121 16:11:14.105165 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-78c64bc9c5-9nlpp"
Jan 21 16:11:14 crc kubenswrapper[4760]: I0121 16:11:14.181351 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-vvctk"]
Jan 21 16:11:14 crc kubenswrapper[4760]: I0121 16:11:14.182301 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-d558885bc-vvctk" podUID="d9eacc7b-4ed9-4c85-b348-13155546eae1" containerName="dnsmasq-dns" containerID="cri-o://4559bd863c88826aa4b83b9e4b6616d155e234adc22ca4eea35b64fa9ecb1160" gracePeriod=10
Jan 21 16:11:14 crc kubenswrapper[4760]: I0121 16:11:14.526412 4760 generic.go:334] "Generic (PLEG): container finished" podID="d9eacc7b-4ed9-4c85-b348-13155546eae1" containerID="4559bd863c88826aa4b83b9e4b6616d155e234adc22ca4eea35b64fa9ecb1160" exitCode=0
Jan 21 16:11:14 crc kubenswrapper[4760]: I0121 16:11:14.526465 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-vvctk" event={"ID":"d9eacc7b-4ed9-4c85-b348-13155546eae1","Type":"ContainerDied","Data":"4559bd863c88826aa4b83b9e4b6616d155e234adc22ca4eea35b64fa9ecb1160"}
Jan 21 16:11:14 crc kubenswrapper[4760]: I0121 16:11:14.668705 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-vvctk"
Jan 21 16:11:14 crc kubenswrapper[4760]: I0121 16:11:14.784928 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-ovsdbserver-sb\") pod \"d9eacc7b-4ed9-4c85-b348-13155546eae1\" (UID: \"d9eacc7b-4ed9-4c85-b348-13155546eae1\") "
Jan 21 16:11:14 crc kubenswrapper[4760]: I0121 16:11:14.785523 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbqfr\" (UniqueName: \"kubernetes.io/projected/d9eacc7b-4ed9-4c85-b348-13155546eae1-kube-api-access-zbqfr\") pod \"d9eacc7b-4ed9-4c85-b348-13155546eae1\" (UID: \"d9eacc7b-4ed9-4c85-b348-13155546eae1\") "
Jan 21 16:11:14 crc kubenswrapper[4760]: I0121 16:11:14.785613 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-config\") pod \"d9eacc7b-4ed9-4c85-b348-13155546eae1\" (UID: \"d9eacc7b-4ed9-4c85-b348-13155546eae1\") "
Jan 21 16:11:14 crc kubenswrapper[4760]: I0121 16:11:14.785694 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-openstack-edpm-ipam\") pod \"d9eacc7b-4ed9-4c85-b348-13155546eae1\" (UID: \"d9eacc7b-4ed9-4c85-b348-13155546eae1\") "
Jan 21 16:11:14 crc kubenswrapper[4760]: I0121 16:11:14.785778 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-dns-swift-storage-0\") pod \"d9eacc7b-4ed9-4c85-b348-13155546eae1\" (UID: \"d9eacc7b-4ed9-4c85-b348-13155546eae1\") "
Jan 21 16:11:14 crc kubenswrapper[4760]: I0121 16:11:14.785825 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-dns-svc\") pod \"d9eacc7b-4ed9-4c85-b348-13155546eae1\" (UID: \"d9eacc7b-4ed9-4c85-b348-13155546eae1\") "
Jan 21 16:11:14 crc kubenswrapper[4760]: I0121 16:11:14.785861 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-ovsdbserver-nb\") pod \"d9eacc7b-4ed9-4c85-b348-13155546eae1\" (UID: \"d9eacc7b-4ed9-4c85-b348-13155546eae1\") "
Jan 21 16:11:14 crc kubenswrapper[4760]: I0121 16:11:14.794814 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9eacc7b-4ed9-4c85-b348-13155546eae1-kube-api-access-zbqfr" (OuterVolumeSpecName: "kube-api-access-zbqfr") pod "d9eacc7b-4ed9-4c85-b348-13155546eae1" (UID: "d9eacc7b-4ed9-4c85-b348-13155546eae1"). InnerVolumeSpecName "kube-api-access-zbqfr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:11:14 crc kubenswrapper[4760]: I0121 16:11:14.872002 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "d9eacc7b-4ed9-4c85-b348-13155546eae1" (UID: "d9eacc7b-4ed9-4c85-b348-13155546eae1"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 16:11:14 crc kubenswrapper[4760]: I0121 16:11:14.874736 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-config" (OuterVolumeSpecName: "config") pod "d9eacc7b-4ed9-4c85-b348-13155546eae1" (UID: "d9eacc7b-4ed9-4c85-b348-13155546eae1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 16:11:14 crc kubenswrapper[4760]: I0121 16:11:14.877876 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d9eacc7b-4ed9-4c85-b348-13155546eae1" (UID: "d9eacc7b-4ed9-4c85-b348-13155546eae1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 16:11:14 crc kubenswrapper[4760]: I0121 16:11:14.878227 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d9eacc7b-4ed9-4c85-b348-13155546eae1" (UID: "d9eacc7b-4ed9-4c85-b348-13155546eae1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 16:11:14 crc kubenswrapper[4760]: I0121 16:11:14.883420 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d9eacc7b-4ed9-4c85-b348-13155546eae1" (UID: "d9eacc7b-4ed9-4c85-b348-13155546eae1"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 16:11:14 crc kubenswrapper[4760]: I0121 16:11:14.888118 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d9eacc7b-4ed9-4c85-b348-13155546eae1" (UID: "d9eacc7b-4ed9-4c85-b348-13155546eae1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 16:11:14 crc kubenswrapper[4760]: I0121 16:11:14.888446 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 21 16:11:14 crc kubenswrapper[4760]: I0121 16:11:14.888480 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbqfr\" (UniqueName: \"kubernetes.io/projected/d9eacc7b-4ed9-4c85-b348-13155546eae1-kube-api-access-zbqfr\") on node \"crc\" DevicePath \"\""
Jan 21 16:11:14 crc kubenswrapper[4760]: I0121 16:11:14.888494 4760 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-config\") on node \"crc\" DevicePath \"\""
Jan 21 16:11:14 crc kubenswrapper[4760]: I0121 16:11:14.888503 4760 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 21 16:11:14 crc kubenswrapper[4760]: I0121 16:11:14.888512 4760 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 21 16:11:14 crc kubenswrapper[4760]: I0121 16:11:14.888521 4760 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 21 16:11:14 crc kubenswrapper[4760]: I0121 16:11:14.888529 4760 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d9eacc7b-4ed9-4c85-b348-13155546eae1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 21 16:11:15 crc kubenswrapper[4760]: I0121 16:11:15.539735 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-vvctk" event={"ID":"d9eacc7b-4ed9-4c85-b348-13155546eae1","Type":"ContainerDied","Data":"1fe40e2a043304dcc7f01a685260f24c84be0f3b76e03ce38b2aba6e1f614c1b"}
Jan 21 16:11:15 crc kubenswrapper[4760]: I0121 16:11:15.539812 4760 scope.go:117] "RemoveContainer" containerID="4559bd863c88826aa4b83b9e4b6616d155e234adc22ca4eea35b64fa9ecb1160"
Jan 21 16:11:15 crc kubenswrapper[4760]: I0121 16:11:15.539819 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-vvctk"
Jan 21 16:11:15 crc kubenswrapper[4760]: I0121 16:11:15.571983 4760 scope.go:117] "RemoveContainer" containerID="5575a91eb5b49e5c35ee96f8a367ebb1977c62123668b247c0f8e1c49dee3f2b"
Jan 21 16:11:15 crc kubenswrapper[4760]: I0121 16:11:15.582590 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-vvctk"]
Jan 21 16:11:15 crc kubenswrapper[4760]: I0121 16:11:15.596589 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-vvctk"]
Jan 21 16:11:15 crc kubenswrapper[4760]: I0121 16:11:15.635006 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9eacc7b-4ed9-4c85-b348-13155546eae1" path="/var/lib/kubelet/pods/d9eacc7b-4ed9-4c85-b348-13155546eae1/volumes"
Jan 21 16:11:20 crc kubenswrapper[4760]: I0121 16:11:20.946588 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 16:11:20 crc kubenswrapper[4760]: I0121 16:11:20.947394 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 16:11:25 crc kubenswrapper[4760]: I0121 16:11:25.637554 4760 generic.go:334] "Generic (PLEG): container finished" podID="bf6d5aab-531b-4b6b-94fc-1b386b6b7684" containerID="55638f7fd284ea107dc53866ab65bdd498dffd06c0487c1aaca0f6a62ae66b47" exitCode=0
Jan 21 16:11:25 crc kubenswrapper[4760]: I0121 16:11:25.637812 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bf6d5aab-531b-4b6b-94fc-1b386b6b7684","Type":"ContainerDied","Data":"55638f7fd284ea107dc53866ab65bdd498dffd06c0487c1aaca0f6a62ae66b47"}
Jan 21 16:11:26 crc kubenswrapper[4760]: I0121 16:11:26.657792 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bf6d5aab-531b-4b6b-94fc-1b386b6b7684","Type":"ContainerStarted","Data":"1e32f3702c311d03f14a4b6611c61fcc68eaac835391147e65b945c5e8d5aad8"}
Jan 21 16:11:26 crc kubenswrapper[4760]: I0121 16:11:26.660236 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Jan 21 16:11:26 crc kubenswrapper[4760]: I0121 16:11:26.733831 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.733806098 podStartE2EDuration="36.733806098s" podCreationTimestamp="2026-01-21 16:10:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:11:26.722342493 +0000 UTC m=+1457.390112091" watchObservedRunningTime="2026-01-21 16:11:26.733806098 +0000 UTC m=+1457.401575676"
Jan 21 16:11:26 crc kubenswrapper[4760]: I0121 16:11:26.804087 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg"]
Jan 21 16:11:26 crc kubenswrapper[4760]: E0121 16:11:26.804913 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9eacc7b-4ed9-4c85-b348-13155546eae1" containerName="dnsmasq-dns"
Jan 21 16:11:26 crc kubenswrapper[4760]: I0121 16:11:26.804946 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9eacc7b-4ed9-4c85-b348-13155546eae1" containerName="dnsmasq-dns"
Jan 21 16:11:26 crc kubenswrapper[4760]: E0121 16:11:26.804987 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9eacc7b-4ed9-4c85-b348-13155546eae1" containerName="init"
Jan 21 16:11:26 crc kubenswrapper[4760]: I0121 16:11:26.805001 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9eacc7b-4ed9-4c85-b348-13155546eae1" containerName="init"
Jan 21 16:11:26 crc kubenswrapper[4760]: E0121 16:11:26.805026 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bddc2f23-658d-41d3-a844-389116907417" containerName="init"
Jan 21 16:11:26 crc kubenswrapper[4760]: I0121 16:11:26.805037 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="bddc2f23-658d-41d3-a844-389116907417" containerName="init"
Jan 21 16:11:26 crc kubenswrapper[4760]: E0121 16:11:26.805052 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bddc2f23-658d-41d3-a844-389116907417" containerName="dnsmasq-dns"
Jan 21 16:11:26 crc kubenswrapper[4760]: I0121 16:11:26.805062 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="bddc2f23-658d-41d3-a844-389116907417" containerName="dnsmasq-dns"
Jan 21 16:11:26 crc kubenswrapper[4760]: I0121 16:11:26.805473 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="bddc2f23-658d-41d3-a844-389116907417" containerName="dnsmasq-dns"
Jan 21 16:11:26 crc kubenswrapper[4760]: I0121 16:11:26.805503 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9eacc7b-4ed9-4c85-b348-13155546eae1" containerName="dnsmasq-dns"
Jan 21 16:11:26 crc kubenswrapper[4760]: I0121 16:11:26.806684 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg"
Jan 21 16:11:26 crc kubenswrapper[4760]: I0121 16:11:26.813995 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg"]
Jan 21 16:11:26 crc kubenswrapper[4760]: I0121 16:11:26.902965 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 21 16:11:26 crc kubenswrapper[4760]: I0121 16:11:26.903447 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 21 16:11:26 crc kubenswrapper[4760]: I0121 16:11:26.903677 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 21 16:11:26 crc kubenswrapper[4760]: I0121 16:11:26.904014 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-brqp8"
Jan 21 16:11:27 crc kubenswrapper[4760]: I0121 16:11:27.001155 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c223d637-a759-4b7a-9eca-d4aa22707301-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg\" (UID: \"c223d637-a759-4b7a-9eca-d4aa22707301\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg"
Jan 21 16:11:27 crc kubenswrapper[4760]: I0121 16:11:27.001298 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c223d637-a759-4b7a-9eca-d4aa22707301-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg\" (UID: \"c223d637-a759-4b7a-9eca-d4aa22707301\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg"
Jan 21 16:11:27 crc kubenswrapper[4760]: I0121 16:11:27.001435 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c223d637-a759-4b7a-9eca-d4aa22707301-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg\" (UID: \"c223d637-a759-4b7a-9eca-d4aa22707301\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg"
Jan 21 16:11:27 crc kubenswrapper[4760]: I0121 16:11:27.001461 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57f2b\" (UniqueName: \"kubernetes.io/projected/c223d637-a759-4b7a-9eca-d4aa22707301-kube-api-access-57f2b\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg\" (UID: \"c223d637-a759-4b7a-9eca-d4aa22707301\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg"
Jan 21 16:11:27 crc kubenswrapper[4760]: I0121 16:11:27.103880 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c223d637-a759-4b7a-9eca-d4aa22707301-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg\" (UID: \"c223d637-a759-4b7a-9eca-d4aa22707301\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg"
Jan 21 16:11:27 crc kubenswrapper[4760]: I0121 16:11:27.103960 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57f2b\" (UniqueName: \"kubernetes.io/projected/c223d637-a759-4b7a-9eca-d4aa22707301-kube-api-access-57f2b\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg\" (UID: \"c223d637-a759-4b7a-9eca-d4aa22707301\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg"
Jan 21 16:11:27 crc kubenswrapper[4760]: I0121 16:11:27.104249 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c223d637-a759-4b7a-9eca-d4aa22707301-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg\" (UID: \"c223d637-a759-4b7a-9eca-d4aa22707301\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg"
Jan 21 16:11:27 crc kubenswrapper[4760]: I0121 16:11:27.104758 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c223d637-a759-4b7a-9eca-d4aa22707301-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg\" (UID: \"c223d637-a759-4b7a-9eca-d4aa22707301\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg"
Jan 21 16:11:27 crc kubenswrapper[4760]: I0121 16:11:27.132482 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c223d637-a759-4b7a-9eca-d4aa22707301-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg\" (UID: \"c223d637-a759-4b7a-9eca-d4aa22707301\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg"
Jan 21 16:11:27 crc kubenswrapper[4760]: I0121 16:11:27.137414 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c223d637-a759-4b7a-9eca-d4aa22707301-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg\" (UID: \"c223d637-a759-4b7a-9eca-d4aa22707301\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg"
Jan 21 16:11:27 crc kubenswrapper[4760]: I0121 16:11:27.137951 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c223d637-a759-4b7a-9eca-d4aa22707301-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg\" (UID: \"c223d637-a759-4b7a-9eca-d4aa22707301\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg"
Jan 21 16:11:27 crc kubenswrapper[4760]: I0121 16:11:27.140140 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57f2b\" (UniqueName: \"kubernetes.io/projected/c223d637-a759-4b7a-9eca-d4aa22707301-kube-api-access-57f2b\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg\" (UID: \"c223d637-a759-4b7a-9eca-d4aa22707301\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg"
Jan 21 16:11:27 crc kubenswrapper[4760]: I0121 16:11:27.215393 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg"
Jan 21 16:11:27 crc kubenswrapper[4760]: I0121 16:11:27.673604 4760 generic.go:334] "Generic (PLEG): container finished" podID="3751c728-a57c-483f-847a-b8765d807937" containerID="5fb2534561918f38a27b5a9ca1c8c859b945b0af9e537cb67367f307ab072b13" exitCode=0
Jan 21 16:11:27 crc kubenswrapper[4760]: I0121 16:11:27.675510 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3751c728-a57c-483f-847a-b8765d807937","Type":"ContainerDied","Data":"5fb2534561918f38a27b5a9ca1c8c859b945b0af9e537cb67367f307ab072b13"}
Jan 21 16:11:27 crc kubenswrapper[4760]: I0121 16:11:27.679942 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg"]
Jan 21 16:11:28 crc kubenswrapper[4760]: I0121 16:11:28.688938 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg" event={"ID":"c223d637-a759-4b7a-9eca-d4aa22707301","Type":"ContainerStarted","Data":"f04a18d64e84e05438bf5dcec4f962ae41fe45a67b18ac3fe4b3875abd075489"}
Jan 21 16:11:28 crc kubenswrapper[4760]: I0121 16:11:28.692586 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3751c728-a57c-483f-847a-b8765d807937","Type":"ContainerStarted","Data":"9290f4721fc0f248e2a3e62bed760808d05421a41d06447a494beab4b76461f5"}
Jan 21 16:11:28 crc kubenswrapper[4760]: I0121 16:11:28.693078 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Jan 21 16:11:28 crc kubenswrapper[4760]: I0121 16:11:28.730964 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.730933569 podStartE2EDuration="37.730933569s" podCreationTimestamp="2026-01-21 16:10:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:11:28.717594826 +0000 UTC m=+1459.385364414" watchObservedRunningTime="2026-01-21 16:11:28.730933569 +0000 UTC m=+1459.398703147"
Jan 21 16:11:39 crc kubenswrapper[4760]: I0121 16:11:39.970249 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg" event={"ID":"c223d637-a759-4b7a-9eca-d4aa22707301","Type":"ContainerStarted","Data":"d42b810dea915b39d196b13f01268f96311ee9371f8c876ac5c3cb2ea7e955e3"}
Jan 21 16:11:39 crc kubenswrapper[4760]: I0121 16:11:39.973537 4760 scope.go:117] "RemoveContainer" containerID="7045c13b067ecb62baec2b3a1ce9d171e656d7b9302065660c9eb374edf7463c"
Jan 21 16:11:40 crc kubenswrapper[4760]: I0121 16:11:40.006971 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg" podStartSLOduration=2.4557914419999998 podStartE2EDuration="14.00678873s" podCreationTimestamp="2026-01-21 16:11:26 +0000 UTC" firstStartedPulling="2026-01-21 16:11:27.703600983 +0000 UTC m=+1458.371370561" lastFinishedPulling="2026-01-21 16:11:39.254598271 +0000 UTC m=+1469.922367849" observedRunningTime="2026-01-21 16:11:39.990096099 +0000 UTC m=+1470.657865677" watchObservedRunningTime="2026-01-21 16:11:40.00678873 +0000 UTC m=+1470.674558318"
Jan 21 16:11:40 crc kubenswrapper[4760]: I0121 16:11:40.013784 4760 scope.go:117] "RemoveContainer" containerID="34038f4c7fac9f938c55ed43e5c32a1fe9257ccbfb52b4dbf532309cae01868b"
Jan 21 16:11:40 crc kubenswrapper[4760]: I0121 16:11:40.066085 4760 scope.go:117] "RemoveContainer" containerID="2d34bfbb1e9562044a28e1b8f99e51d17272859240cd9c059be93073a5a4cbd7"
Jan 21 16:11:40 crc kubenswrapper[4760]: I0121 16:11:40.519490 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Jan 21 16:11:41 crc kubenswrapper[4760]: I0121 16:11:41.822571 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Jan 21 16:11:50 crc kubenswrapper[4760]: I0121 16:11:50.946133 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 16:11:50 crc kubenswrapper[4760]: I0121 16:11:50.946723 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 16:11:50 crc kubenswrapper[4760]: I0121 16:11:50.946778 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r"
Jan 21 16:11:50 crc kubenswrapper[4760]: I0121 16:11:50.947665 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e2bad28ace3137e8b3c05faf3797d4cccff7ccfe4381357924a1c6533e28ed41"} pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 16:11:50 crc kubenswrapper[4760]: I0121 16:11:50.948106 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" containerID="cri-o://e2bad28ace3137e8b3c05faf3797d4cccff7ccfe4381357924a1c6533e28ed41" gracePeriod=600
Jan 21 16:11:51 crc kubenswrapper[4760]: I0121 16:11:51.093052 4760 generic.go:334] "Generic (PLEG): container finished" podID="c223d637-a759-4b7a-9eca-d4aa22707301" containerID="d42b810dea915b39d196b13f01268f96311ee9371f8c876ac5c3cb2ea7e955e3" exitCode=0
Jan 21 16:11:51 crc kubenswrapper[4760]: I0121 16:11:51.093140 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg" event={"ID":"c223d637-a759-4b7a-9eca-d4aa22707301","Type":"ContainerDied","Data":"d42b810dea915b39d196b13f01268f96311ee9371f8c876ac5c3cb2ea7e955e3"}
Jan 21 16:11:51 crc kubenswrapper[4760]: I0121 16:11:51.106612 4760 generic.go:334] "Generic (PLEG): container finished" podID="5dd365e7-570c-4130-a299-30e376624ce2" containerID="e2bad28ace3137e8b3c05faf3797d4cccff7ccfe4381357924a1c6533e28ed41" exitCode=0
Jan 21 16:11:51 crc kubenswrapper[4760]: I0121 16:11:51.106668 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerDied","Data":"e2bad28ace3137e8b3c05faf3797d4cccff7ccfe4381357924a1c6533e28ed41"}
Jan 21 16:11:51 crc kubenswrapper[4760]: I0121 16:11:51.106963 4760 scope.go:117] "RemoveContainer" containerID="da9375843cc51770b9bc9917868815839c55dd5f95c6dfc9b5903ba3c26e61df"
Jan 21 16:11:52 crc kubenswrapper[4760]: I0121 16:11:52.117006 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerStarted","Data":"3397077c04f1562ba759efe22bb62c3f46a49e2e5a066d18955c7b87afdb4965"}
Jan 21 16:11:52 crc kubenswrapper[4760]: I0121 16:11:52.539388 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg"
Jan 21 16:11:52 crc kubenswrapper[4760]: I0121 16:11:52.720800 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c223d637-a759-4b7a-9eca-d4aa22707301-repo-setup-combined-ca-bundle\") pod \"c223d637-a759-4b7a-9eca-d4aa22707301\" (UID: \"c223d637-a759-4b7a-9eca-d4aa22707301\") "
Jan 21 16:11:52 crc kubenswrapper[4760]: I0121 16:11:52.721160 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57f2b\" (UniqueName: \"kubernetes.io/projected/c223d637-a759-4b7a-9eca-d4aa22707301-kube-api-access-57f2b\") pod \"c223d637-a759-4b7a-9eca-d4aa22707301\" (UID: \"c223d637-a759-4b7a-9eca-d4aa22707301\") "
Jan 21 16:11:52 crc kubenswrapper[4760]: I0121 16:11:52.721271 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c223d637-a759-4b7a-9eca-d4aa22707301-ssh-key-openstack-edpm-ipam\") pod \"c223d637-a759-4b7a-9eca-d4aa22707301\" (UID: \"c223d637-a759-4b7a-9eca-d4aa22707301\") "
Jan 21 16:11:52 crc kubenswrapper[4760]: I0121 16:11:52.721397 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c223d637-a759-4b7a-9eca-d4aa22707301-inventory\") pod \"c223d637-a759-4b7a-9eca-d4aa22707301\" (UID: \"c223d637-a759-4b7a-9eca-d4aa22707301\") "
Jan 21 16:11:52 crc kubenswrapper[4760]: I0121 16:11:52.728226 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c223d637-a759-4b7a-9eca-d4aa22707301-kube-api-access-57f2b" (OuterVolumeSpecName: "kube-api-access-57f2b") pod "c223d637-a759-4b7a-9eca-d4aa22707301" (UID: "c223d637-a759-4b7a-9eca-d4aa22707301"). InnerVolumeSpecName "kube-api-access-57f2b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:11:52 crc kubenswrapper[4760]: I0121 16:11:52.728528 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c223d637-a759-4b7a-9eca-d4aa22707301-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "c223d637-a759-4b7a-9eca-d4aa22707301" (UID: "c223d637-a759-4b7a-9eca-d4aa22707301"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:11:52 crc kubenswrapper[4760]: I0121 16:11:52.752728 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c223d637-a759-4b7a-9eca-d4aa22707301-inventory" (OuterVolumeSpecName: "inventory") pod "c223d637-a759-4b7a-9eca-d4aa22707301" (UID: "c223d637-a759-4b7a-9eca-d4aa22707301"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:11:52 crc kubenswrapper[4760]: I0121 16:11:52.753541 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c223d637-a759-4b7a-9eca-d4aa22707301-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c223d637-a759-4b7a-9eca-d4aa22707301" (UID: "c223d637-a759-4b7a-9eca-d4aa22707301"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:11:52 crc kubenswrapper[4760]: I0121 16:11:52.823982 4760 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c223d637-a759-4b7a-9eca-d4aa22707301-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 16:11:52 crc kubenswrapper[4760]: I0121 16:11:52.824291 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57f2b\" (UniqueName: \"kubernetes.io/projected/c223d637-a759-4b7a-9eca-d4aa22707301-kube-api-access-57f2b\") on node \"crc\" DevicePath \"\""
Jan 21 16:11:52 crc kubenswrapper[4760]: I0121 16:11:52.824301 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c223d637-a759-4b7a-9eca-d4aa22707301-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 21 16:11:52 crc kubenswrapper[4760]: I0121 16:11:52.824311 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c223d637-a759-4b7a-9eca-d4aa22707301-inventory\") on node \"crc\" DevicePath \"\""
Jan 21 16:11:53 crc kubenswrapper[4760]: I0121 16:11:53.160124 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg" event={"ID":"c223d637-a759-4b7a-9eca-d4aa22707301","Type":"ContainerDied","Data":"f04a18d64e84e05438bf5dcec4f962ae41fe45a67b18ac3fe4b3875abd075489"}
Jan 21 16:11:53 crc kubenswrapper[4760]: I0121 16:11:53.160179 4760 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg" Jan 21 16:11:53 crc kubenswrapper[4760]: I0121 16:11:53.160195 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f04a18d64e84e05438bf5dcec4f962ae41fe45a67b18ac3fe4b3875abd075489" Jan 21 16:11:53 crc kubenswrapper[4760]: I0121 16:11:53.622537 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-8jj42"] Jan 21 16:11:53 crc kubenswrapper[4760]: E0121 16:11:53.623255 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c223d637-a759-4b7a-9eca-d4aa22707301" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 21 16:11:53 crc kubenswrapper[4760]: I0121 16:11:53.623287 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="c223d637-a759-4b7a-9eca-d4aa22707301" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 21 16:11:53 crc kubenswrapper[4760]: I0121 16:11:53.623622 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="c223d637-a759-4b7a-9eca-d4aa22707301" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 21 16:11:53 crc kubenswrapper[4760]: I0121 16:11:53.625624 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8jj42" Jan 21 16:11:53 crc kubenswrapper[4760]: I0121 16:11:53.628376 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 16:11:53 crc kubenswrapper[4760]: I0121 16:11:53.630430 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 16:11:53 crc kubenswrapper[4760]: I0121 16:11:53.630530 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-brqp8" Jan 21 16:11:53 crc kubenswrapper[4760]: I0121 16:11:53.630723 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:11:53 crc kubenswrapper[4760]: I0121 16:11:53.639803 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-8jj42"] Jan 21 16:11:53 crc kubenswrapper[4760]: I0121 16:11:53.742013 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07be8207-721d-4d0a-bada-ac8b6c54c3ce-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8jj42\" (UID: \"07be8207-721d-4d0a-bada-ac8b6c54c3ce\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8jj42" Jan 21 16:11:53 crc kubenswrapper[4760]: I0121 16:11:53.742415 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s96fp\" (UniqueName: \"kubernetes.io/projected/07be8207-721d-4d0a-bada-ac8b6c54c3ce-kube-api-access-s96fp\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8jj42\" (UID: \"07be8207-721d-4d0a-bada-ac8b6c54c3ce\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8jj42" Jan 21 16:11:53 crc kubenswrapper[4760]: I0121 16:11:53.742548 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/07be8207-721d-4d0a-bada-ac8b6c54c3ce-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8jj42\" (UID: \"07be8207-721d-4d0a-bada-ac8b6c54c3ce\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8jj42" Jan 21 16:11:53 crc kubenswrapper[4760]: I0121 16:11:53.844940 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07be8207-721d-4d0a-bada-ac8b6c54c3ce-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8jj42\" (UID: \"07be8207-721d-4d0a-bada-ac8b6c54c3ce\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8jj42" Jan 21 16:11:53 crc kubenswrapper[4760]: I0121 16:11:53.844995 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s96fp\" (UniqueName: \"kubernetes.io/projected/07be8207-721d-4d0a-bada-ac8b6c54c3ce-kube-api-access-s96fp\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8jj42\" (UID: \"07be8207-721d-4d0a-bada-ac8b6c54c3ce\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8jj42" Jan 21 16:11:53 crc kubenswrapper[4760]: I0121 16:11:53.845450 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/07be8207-721d-4d0a-bada-ac8b6c54c3ce-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8jj42\" (UID: \"07be8207-721d-4d0a-bada-ac8b6c54c3ce\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8jj42" Jan 21 16:11:53 crc kubenswrapper[4760]: I0121 16:11:53.851352 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07be8207-721d-4d0a-bada-ac8b6c54c3ce-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8jj42\" (UID: 
\"07be8207-721d-4d0a-bada-ac8b6c54c3ce\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8jj42" Jan 21 16:11:53 crc kubenswrapper[4760]: I0121 16:11:53.858931 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/07be8207-721d-4d0a-bada-ac8b6c54c3ce-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8jj42\" (UID: \"07be8207-721d-4d0a-bada-ac8b6c54c3ce\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8jj42" Jan 21 16:11:53 crc kubenswrapper[4760]: I0121 16:11:53.876694 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s96fp\" (UniqueName: \"kubernetes.io/projected/07be8207-721d-4d0a-bada-ac8b6c54c3ce-kube-api-access-s96fp\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8jj42\" (UID: \"07be8207-721d-4d0a-bada-ac8b6c54c3ce\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8jj42" Jan 21 16:11:53 crc kubenswrapper[4760]: I0121 16:11:53.951584 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8jj42" Jan 21 16:11:54 crc kubenswrapper[4760]: I0121 16:11:54.476351 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-8jj42"] Jan 21 16:11:55 crc kubenswrapper[4760]: I0121 16:11:55.182077 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8jj42" event={"ID":"07be8207-721d-4d0a-bada-ac8b6c54c3ce","Type":"ContainerStarted","Data":"898d708c0e7eb89a301a0f165808e91cffbb37e090a7656eadf1fa1f252d7c1a"} Jan 21 16:11:55 crc kubenswrapper[4760]: I0121 16:11:55.182456 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8jj42" event={"ID":"07be8207-721d-4d0a-bada-ac8b6c54c3ce","Type":"ContainerStarted","Data":"cfb5e69134d5fc8caa17dfd75467eb74b9992a522c2f3316851201583e318165"} Jan 21 16:11:55 crc kubenswrapper[4760]: I0121 16:11:55.201084 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8jj42" podStartSLOduration=1.804292655 podStartE2EDuration="2.201058226s" podCreationTimestamp="2026-01-21 16:11:53 +0000 UTC" firstStartedPulling="2026-01-21 16:11:54.498508477 +0000 UTC m=+1485.166278055" lastFinishedPulling="2026-01-21 16:11:54.895274048 +0000 UTC m=+1485.563043626" observedRunningTime="2026-01-21 16:11:55.196748265 +0000 UTC m=+1485.864517843" watchObservedRunningTime="2026-01-21 16:11:55.201058226 +0000 UTC m=+1485.868827804" Jan 21 16:11:58 crc kubenswrapper[4760]: I0121 16:11:58.211000 4760 generic.go:334] "Generic (PLEG): container finished" podID="07be8207-721d-4d0a-bada-ac8b6c54c3ce" containerID="898d708c0e7eb89a301a0f165808e91cffbb37e090a7656eadf1fa1f252d7c1a" exitCode=0 Jan 21 16:11:58 crc kubenswrapper[4760]: I0121 16:11:58.211117 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8jj42" event={"ID":"07be8207-721d-4d0a-bada-ac8b6c54c3ce","Type":"ContainerDied","Data":"898d708c0e7eb89a301a0f165808e91cffbb37e090a7656eadf1fa1f252d7c1a"} Jan 21 16:11:59 crc kubenswrapper[4760]: I0121 16:11:59.716752 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8jj42" Jan 21 16:11:59 crc kubenswrapper[4760]: I0121 16:11:59.893641 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/07be8207-721d-4d0a-bada-ac8b6c54c3ce-ssh-key-openstack-edpm-ipam\") pod \"07be8207-721d-4d0a-bada-ac8b6c54c3ce\" (UID: \"07be8207-721d-4d0a-bada-ac8b6c54c3ce\") " Jan 21 16:11:59 crc kubenswrapper[4760]: I0121 16:11:59.893844 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s96fp\" (UniqueName: \"kubernetes.io/projected/07be8207-721d-4d0a-bada-ac8b6c54c3ce-kube-api-access-s96fp\") pod \"07be8207-721d-4d0a-bada-ac8b6c54c3ce\" (UID: \"07be8207-721d-4d0a-bada-ac8b6c54c3ce\") " Jan 21 16:11:59 crc kubenswrapper[4760]: I0121 16:11:59.893903 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07be8207-721d-4d0a-bada-ac8b6c54c3ce-inventory\") pod \"07be8207-721d-4d0a-bada-ac8b6c54c3ce\" (UID: \"07be8207-721d-4d0a-bada-ac8b6c54c3ce\") " Jan 21 16:11:59 crc kubenswrapper[4760]: I0121 16:11:59.902584 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07be8207-721d-4d0a-bada-ac8b6c54c3ce-kube-api-access-s96fp" (OuterVolumeSpecName: "kube-api-access-s96fp") pod "07be8207-721d-4d0a-bada-ac8b6c54c3ce" (UID: "07be8207-721d-4d0a-bada-ac8b6c54c3ce"). InnerVolumeSpecName "kube-api-access-s96fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:11:59 crc kubenswrapper[4760]: I0121 16:11:59.934390 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07be8207-721d-4d0a-bada-ac8b6c54c3ce-inventory" (OuterVolumeSpecName: "inventory") pod "07be8207-721d-4d0a-bada-ac8b6c54c3ce" (UID: "07be8207-721d-4d0a-bada-ac8b6c54c3ce"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:11:59 crc kubenswrapper[4760]: I0121 16:11:59.935748 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07be8207-721d-4d0a-bada-ac8b6c54c3ce-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "07be8207-721d-4d0a-bada-ac8b6c54c3ce" (UID: "07be8207-721d-4d0a-bada-ac8b6c54c3ce"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:11:59 crc kubenswrapper[4760]: I0121 16:11:59.996667 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/07be8207-721d-4d0a-bada-ac8b6c54c3ce-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:59 crc kubenswrapper[4760]: I0121 16:11:59.996858 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s96fp\" (UniqueName: \"kubernetes.io/projected/07be8207-721d-4d0a-bada-ac8b6c54c3ce-kube-api-access-s96fp\") on node \"crc\" DevicePath \"\"" Jan 21 16:11:59 crc kubenswrapper[4760]: I0121 16:11:59.996948 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07be8207-721d-4d0a-bada-ac8b6c54c3ce-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:12:00 crc kubenswrapper[4760]: I0121 16:12:00.241783 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8jj42" 
event={"ID":"07be8207-721d-4d0a-bada-ac8b6c54c3ce","Type":"ContainerDied","Data":"cfb5e69134d5fc8caa17dfd75467eb74b9992a522c2f3316851201583e318165"} Jan 21 16:12:00 crc kubenswrapper[4760]: I0121 16:12:00.241844 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cfb5e69134d5fc8caa17dfd75467eb74b9992a522c2f3316851201583e318165" Jan 21 16:12:00 crc kubenswrapper[4760]: I0121 16:12:00.241985 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8jj42" Jan 21 16:12:00 crc kubenswrapper[4760]: I0121 16:12:00.321113 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f"] Jan 21 16:12:00 crc kubenswrapper[4760]: E0121 16:12:00.321522 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07be8207-721d-4d0a-bada-ac8b6c54c3ce" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 21 16:12:00 crc kubenswrapper[4760]: I0121 16:12:00.321539 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="07be8207-721d-4d0a-bada-ac8b6c54c3ce" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 21 16:12:00 crc kubenswrapper[4760]: I0121 16:12:00.321727 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="07be8207-721d-4d0a-bada-ac8b6c54c3ce" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 21 16:12:00 crc kubenswrapper[4760]: I0121 16:12:00.322415 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f" Jan 21 16:12:00 crc kubenswrapper[4760]: I0121 16:12:00.324715 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-brqp8" Jan 21 16:12:00 crc kubenswrapper[4760]: I0121 16:12:00.324798 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 16:12:00 crc kubenswrapper[4760]: I0121 16:12:00.325632 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 16:12:00 crc kubenswrapper[4760]: I0121 16:12:00.326388 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:12:00 crc kubenswrapper[4760]: I0121 16:12:00.331889 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f"] Jan 21 16:12:00 crc kubenswrapper[4760]: I0121 16:12:00.507492 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4ba3e4f-146a-4af6-885a-877760c90ce0-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f\" (UID: \"f4ba3e4f-146a-4af6-885a-877760c90ce0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f" Jan 21 16:12:00 crc kubenswrapper[4760]: I0121 16:12:00.508505 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f4ba3e4f-146a-4af6-885a-877760c90ce0-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f\" (UID: \"f4ba3e4f-146a-4af6-885a-877760c90ce0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f" Jan 21 16:12:00 crc kubenswrapper[4760]: I0121 16:12:00.508669 4760 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fsgj\" (UniqueName: \"kubernetes.io/projected/f4ba3e4f-146a-4af6-885a-877760c90ce0-kube-api-access-5fsgj\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f\" (UID: \"f4ba3e4f-146a-4af6-885a-877760c90ce0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f" Jan 21 16:12:00 crc kubenswrapper[4760]: I0121 16:12:00.508833 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4ba3e4f-146a-4af6-885a-877760c90ce0-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f\" (UID: \"f4ba3e4f-146a-4af6-885a-877760c90ce0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f" Jan 21 16:12:00 crc kubenswrapper[4760]: I0121 16:12:00.611075 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f4ba3e4f-146a-4af6-885a-877760c90ce0-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f\" (UID: \"f4ba3e4f-146a-4af6-885a-877760c90ce0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f" Jan 21 16:12:00 crc kubenswrapper[4760]: I0121 16:12:00.611184 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fsgj\" (UniqueName: \"kubernetes.io/projected/f4ba3e4f-146a-4af6-885a-877760c90ce0-kube-api-access-5fsgj\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f\" (UID: \"f4ba3e4f-146a-4af6-885a-877760c90ce0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f" Jan 21 16:12:00 crc kubenswrapper[4760]: I0121 16:12:00.611273 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f4ba3e4f-146a-4af6-885a-877760c90ce0-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f\" (UID: \"f4ba3e4f-146a-4af6-885a-877760c90ce0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f" Jan 21 16:12:00 crc kubenswrapper[4760]: I0121 16:12:00.611317 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4ba3e4f-146a-4af6-885a-877760c90ce0-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f\" (UID: \"f4ba3e4f-146a-4af6-885a-877760c90ce0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f" Jan 21 16:12:00 crc kubenswrapper[4760]: I0121 16:12:00.617726 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4ba3e4f-146a-4af6-885a-877760c90ce0-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f\" (UID: \"f4ba3e4f-146a-4af6-885a-877760c90ce0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f" Jan 21 16:12:00 crc kubenswrapper[4760]: I0121 16:12:00.619599 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f4ba3e4f-146a-4af6-885a-877760c90ce0-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f\" (UID: \"f4ba3e4f-146a-4af6-885a-877760c90ce0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f" Jan 21 16:12:00 crc kubenswrapper[4760]: I0121 16:12:00.632598 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4ba3e4f-146a-4af6-885a-877760c90ce0-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f\" (UID: \"f4ba3e4f-146a-4af6-885a-877760c90ce0\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f" Jan 21 16:12:00 crc kubenswrapper[4760]: I0121 16:12:00.633954 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fsgj\" (UniqueName: \"kubernetes.io/projected/f4ba3e4f-146a-4af6-885a-877760c90ce0-kube-api-access-5fsgj\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f\" (UID: \"f4ba3e4f-146a-4af6-885a-877760c90ce0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f" Jan 21 16:12:00 crc kubenswrapper[4760]: I0121 16:12:00.653138 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f" Jan 21 16:12:01 crc kubenswrapper[4760]: I0121 16:12:01.260477 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f"] Jan 21 16:12:02 crc kubenswrapper[4760]: I0121 16:12:02.275864 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f" event={"ID":"f4ba3e4f-146a-4af6-885a-877760c90ce0","Type":"ContainerStarted","Data":"defdf7fa7292a1d9855424da069fd6a8bf3368105d1a5f8798dea777f52df2a8"} Jan 21 16:12:02 crc kubenswrapper[4760]: I0121 16:12:02.276909 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f" event={"ID":"f4ba3e4f-146a-4af6-885a-877760c90ce0","Type":"ContainerStarted","Data":"eec29a2bedebeb1a33619871536ab8542a1b2f6f9a986c85306788a51f25389a"} Jan 21 16:12:02 crc kubenswrapper[4760]: I0121 16:12:02.305341 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f" podStartSLOduration=1.819095758 podStartE2EDuration="2.305297405s" podCreationTimestamp="2026-01-21 16:12:00 +0000 UTC" firstStartedPulling="2026-01-21 16:12:01.263157299 +0000 UTC m=+1491.930926877" 
lastFinishedPulling="2026-01-21 16:12:01.749358946 +0000 UTC m=+1492.417128524" observedRunningTime="2026-01-21 16:12:02.29583227 +0000 UTC m=+1492.963601848" watchObservedRunningTime="2026-01-21 16:12:02.305297405 +0000 UTC m=+1492.973066983" Jan 21 16:12:40 crc kubenswrapper[4760]: I0121 16:12:40.261640 4760 scope.go:117] "RemoveContainer" containerID="7eb018cf6bcb54d596b8acbb0255bd775c4f2cb81165dc1d895a7dd61789b94b" Jan 21 16:13:17 crc kubenswrapper[4760]: I0121 16:13:17.345744 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-664fs"] Jan 21 16:13:17 crc kubenswrapper[4760]: I0121 16:13:17.351008 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-664fs" Jan 21 16:13:17 crc kubenswrapper[4760]: I0121 16:13:17.363513 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-664fs"] Jan 21 16:13:17 crc kubenswrapper[4760]: I0121 16:13:17.443551 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff420ce7-afc0-42f7-bdcd-9c06187dfbee-utilities\") pod \"redhat-operators-664fs\" (UID: \"ff420ce7-afc0-42f7-bdcd-9c06187dfbee\") " pod="openshift-marketplace/redhat-operators-664fs" Jan 21 16:13:17 crc kubenswrapper[4760]: I0121 16:13:17.443874 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff420ce7-afc0-42f7-bdcd-9c06187dfbee-catalog-content\") pod \"redhat-operators-664fs\" (UID: \"ff420ce7-afc0-42f7-bdcd-9c06187dfbee\") " pod="openshift-marketplace/redhat-operators-664fs" Jan 21 16:13:17 crc kubenswrapper[4760]: I0121 16:13:17.444066 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gh6c\" (UniqueName: 
\"kubernetes.io/projected/ff420ce7-afc0-42f7-bdcd-9c06187dfbee-kube-api-access-7gh6c\") pod \"redhat-operators-664fs\" (UID: \"ff420ce7-afc0-42f7-bdcd-9c06187dfbee\") " pod="openshift-marketplace/redhat-operators-664fs" Jan 21 16:13:17 crc kubenswrapper[4760]: I0121 16:13:17.546562 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff420ce7-afc0-42f7-bdcd-9c06187dfbee-utilities\") pod \"redhat-operators-664fs\" (UID: \"ff420ce7-afc0-42f7-bdcd-9c06187dfbee\") " pod="openshift-marketplace/redhat-operators-664fs" Jan 21 16:13:17 crc kubenswrapper[4760]: I0121 16:13:17.546606 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff420ce7-afc0-42f7-bdcd-9c06187dfbee-catalog-content\") pod \"redhat-operators-664fs\" (UID: \"ff420ce7-afc0-42f7-bdcd-9c06187dfbee\") " pod="openshift-marketplace/redhat-operators-664fs" Jan 21 16:13:17 crc kubenswrapper[4760]: I0121 16:13:17.546656 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gh6c\" (UniqueName: \"kubernetes.io/projected/ff420ce7-afc0-42f7-bdcd-9c06187dfbee-kube-api-access-7gh6c\") pod \"redhat-operators-664fs\" (UID: \"ff420ce7-afc0-42f7-bdcd-9c06187dfbee\") " pod="openshift-marketplace/redhat-operators-664fs" Jan 21 16:13:17 crc kubenswrapper[4760]: I0121 16:13:17.547366 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff420ce7-afc0-42f7-bdcd-9c06187dfbee-utilities\") pod \"redhat-operators-664fs\" (UID: \"ff420ce7-afc0-42f7-bdcd-9c06187dfbee\") " pod="openshift-marketplace/redhat-operators-664fs" Jan 21 16:13:17 crc kubenswrapper[4760]: I0121 16:13:17.547370 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/ff420ce7-afc0-42f7-bdcd-9c06187dfbee-catalog-content\") pod \"redhat-operators-664fs\" (UID: \"ff420ce7-afc0-42f7-bdcd-9c06187dfbee\") " pod="openshift-marketplace/redhat-operators-664fs" Jan 21 16:13:17 crc kubenswrapper[4760]: I0121 16:13:17.577301 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gh6c\" (UniqueName: \"kubernetes.io/projected/ff420ce7-afc0-42f7-bdcd-9c06187dfbee-kube-api-access-7gh6c\") pod \"redhat-operators-664fs\" (UID: \"ff420ce7-afc0-42f7-bdcd-9c06187dfbee\") " pod="openshift-marketplace/redhat-operators-664fs" Jan 21 16:13:17 crc kubenswrapper[4760]: I0121 16:13:17.676697 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-664fs" Jan 21 16:13:18 crc kubenswrapper[4760]: I0121 16:13:18.158618 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-664fs"] Jan 21 16:13:19 crc kubenswrapper[4760]: I0121 16:13:19.112956 4760 generic.go:334] "Generic (PLEG): container finished" podID="ff420ce7-afc0-42f7-bdcd-9c06187dfbee" containerID="426c0c59a999b5bd2ae19339a27fe525b6c94ec350e061e1f7dafdee3a114a4b" exitCode=0 Jan 21 16:13:19 crc kubenswrapper[4760]: I0121 16:13:19.113034 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-664fs" event={"ID":"ff420ce7-afc0-42f7-bdcd-9c06187dfbee","Type":"ContainerDied","Data":"426c0c59a999b5bd2ae19339a27fe525b6c94ec350e061e1f7dafdee3a114a4b"} Jan 21 16:13:19 crc kubenswrapper[4760]: I0121 16:13:19.113280 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-664fs" event={"ID":"ff420ce7-afc0-42f7-bdcd-9c06187dfbee","Type":"ContainerStarted","Data":"03325803c5c412751ccb900afe03f0aed9fe359248ea2112592275e784b0822d"} Jan 21 16:13:19 crc kubenswrapper[4760]: I0121 16:13:19.114929 4760 provider.go:102] Refreshing cache for provider: 
*credentialprovider.defaultDockerConfigProvider Jan 21 16:13:21 crc kubenswrapper[4760]: I0121 16:13:21.133261 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-664fs" event={"ID":"ff420ce7-afc0-42f7-bdcd-9c06187dfbee","Type":"ContainerStarted","Data":"25fbd1192021a95afa834c5f9d67ae402224c1fce9b4fbde8cc6f9cf2cbff83b"} Jan 21 16:13:23 crc kubenswrapper[4760]: I0121 16:13:23.156890 4760 generic.go:334] "Generic (PLEG): container finished" podID="ff420ce7-afc0-42f7-bdcd-9c06187dfbee" containerID="25fbd1192021a95afa834c5f9d67ae402224c1fce9b4fbde8cc6f9cf2cbff83b" exitCode=0 Jan 21 16:13:23 crc kubenswrapper[4760]: I0121 16:13:23.156977 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-664fs" event={"ID":"ff420ce7-afc0-42f7-bdcd-9c06187dfbee","Type":"ContainerDied","Data":"25fbd1192021a95afa834c5f9d67ae402224c1fce9b4fbde8cc6f9cf2cbff83b"} Jan 21 16:13:24 crc kubenswrapper[4760]: I0121 16:13:24.170036 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-664fs" event={"ID":"ff420ce7-afc0-42f7-bdcd-9c06187dfbee","Type":"ContainerStarted","Data":"eeccdd81ea232e4dcba3aef2b5197d4e6caf13903fd97e8757ca0d0a5fbe2017"} Jan 21 16:13:24 crc kubenswrapper[4760]: I0121 16:13:24.215137 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-664fs" podStartSLOduration=2.394530875 podStartE2EDuration="7.215110567s" podCreationTimestamp="2026-01-21 16:13:17 +0000 UTC" firstStartedPulling="2026-01-21 16:13:19.114510988 +0000 UTC m=+1569.782280566" lastFinishedPulling="2026-01-21 16:13:23.93509068 +0000 UTC m=+1574.602860258" observedRunningTime="2026-01-21 16:13:24.20528365 +0000 UTC m=+1574.873053248" watchObservedRunningTime="2026-01-21 16:13:24.215110567 +0000 UTC m=+1574.882880145" Jan 21 16:13:27 crc kubenswrapper[4760]: I0121 16:13:27.677620 4760 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-operators-664fs" Jan 21 16:13:27 crc kubenswrapper[4760]: I0121 16:13:27.679293 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-664fs" Jan 21 16:13:28 crc kubenswrapper[4760]: I0121 16:13:28.728026 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-664fs" podUID="ff420ce7-afc0-42f7-bdcd-9c06187dfbee" containerName="registry-server" probeResult="failure" output=< Jan 21 16:13:28 crc kubenswrapper[4760]: timeout: failed to connect service ":50051" within 1s Jan 21 16:13:28 crc kubenswrapper[4760]: > Jan 21 16:13:37 crc kubenswrapper[4760]: I0121 16:13:37.733520 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-664fs" Jan 21 16:13:37 crc kubenswrapper[4760]: I0121 16:13:37.782555 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-664fs" Jan 21 16:13:37 crc kubenswrapper[4760]: I0121 16:13:37.982425 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-664fs"] Jan 21 16:13:39 crc kubenswrapper[4760]: I0121 16:13:39.302277 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-664fs" podUID="ff420ce7-afc0-42f7-bdcd-9c06187dfbee" containerName="registry-server" containerID="cri-o://eeccdd81ea232e4dcba3aef2b5197d4e6caf13903fd97e8757ca0d0a5fbe2017" gracePeriod=2 Jan 21 16:13:39 crc kubenswrapper[4760]: E0121 16:13:39.692218 4760 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff420ce7_afc0_42f7_bdcd_9c06187dfbee.slice/crio-conmon-eeccdd81ea232e4dcba3aef2b5197d4e6caf13903fd97e8757ca0d0a5fbe2017.scope\": RecentStats: unable to find 
data in memory cache]" Jan 21 16:13:40 crc kubenswrapper[4760]: I0121 16:13:40.318482 4760 generic.go:334] "Generic (PLEG): container finished" podID="ff420ce7-afc0-42f7-bdcd-9c06187dfbee" containerID="eeccdd81ea232e4dcba3aef2b5197d4e6caf13903fd97e8757ca0d0a5fbe2017" exitCode=0 Jan 21 16:13:40 crc kubenswrapper[4760]: I0121 16:13:40.318594 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-664fs" event={"ID":"ff420ce7-afc0-42f7-bdcd-9c06187dfbee","Type":"ContainerDied","Data":"eeccdd81ea232e4dcba3aef2b5197d4e6caf13903fd97e8757ca0d0a5fbe2017"} Jan 21 16:13:40 crc kubenswrapper[4760]: I0121 16:13:40.318865 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-664fs" event={"ID":"ff420ce7-afc0-42f7-bdcd-9c06187dfbee","Type":"ContainerDied","Data":"03325803c5c412751ccb900afe03f0aed9fe359248ea2112592275e784b0822d"} Jan 21 16:13:40 crc kubenswrapper[4760]: I0121 16:13:40.318918 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03325803c5c412751ccb900afe03f0aed9fe359248ea2112592275e784b0822d" Jan 21 16:13:40 crc kubenswrapper[4760]: I0121 16:13:40.335659 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-664fs" Jan 21 16:13:40 crc kubenswrapper[4760]: I0121 16:13:40.448124 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff420ce7-afc0-42f7-bdcd-9c06187dfbee-catalog-content\") pod \"ff420ce7-afc0-42f7-bdcd-9c06187dfbee\" (UID: \"ff420ce7-afc0-42f7-bdcd-9c06187dfbee\") " Jan 21 16:13:40 crc kubenswrapper[4760]: I0121 16:13:40.448218 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gh6c\" (UniqueName: \"kubernetes.io/projected/ff420ce7-afc0-42f7-bdcd-9c06187dfbee-kube-api-access-7gh6c\") pod \"ff420ce7-afc0-42f7-bdcd-9c06187dfbee\" (UID: \"ff420ce7-afc0-42f7-bdcd-9c06187dfbee\") " Jan 21 16:13:40 crc kubenswrapper[4760]: I0121 16:13:40.449271 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff420ce7-afc0-42f7-bdcd-9c06187dfbee-utilities\") pod \"ff420ce7-afc0-42f7-bdcd-9c06187dfbee\" (UID: \"ff420ce7-afc0-42f7-bdcd-9c06187dfbee\") " Jan 21 16:13:40 crc kubenswrapper[4760]: I0121 16:13:40.450309 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff420ce7-afc0-42f7-bdcd-9c06187dfbee-utilities" (OuterVolumeSpecName: "utilities") pod "ff420ce7-afc0-42f7-bdcd-9c06187dfbee" (UID: "ff420ce7-afc0-42f7-bdcd-9c06187dfbee"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:13:40 crc kubenswrapper[4760]: I0121 16:13:40.451318 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff420ce7-afc0-42f7-bdcd-9c06187dfbee-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:40 crc kubenswrapper[4760]: I0121 16:13:40.454551 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff420ce7-afc0-42f7-bdcd-9c06187dfbee-kube-api-access-7gh6c" (OuterVolumeSpecName: "kube-api-access-7gh6c") pod "ff420ce7-afc0-42f7-bdcd-9c06187dfbee" (UID: "ff420ce7-afc0-42f7-bdcd-9c06187dfbee"). InnerVolumeSpecName "kube-api-access-7gh6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:13:40 crc kubenswrapper[4760]: I0121 16:13:40.554353 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gh6c\" (UniqueName: \"kubernetes.io/projected/ff420ce7-afc0-42f7-bdcd-9c06187dfbee-kube-api-access-7gh6c\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:40 crc kubenswrapper[4760]: I0121 16:13:40.568626 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff420ce7-afc0-42f7-bdcd-9c06187dfbee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ff420ce7-afc0-42f7-bdcd-9c06187dfbee" (UID: "ff420ce7-afc0-42f7-bdcd-9c06187dfbee"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:13:40 crc kubenswrapper[4760]: I0121 16:13:40.656888 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff420ce7-afc0-42f7-bdcd-9c06187dfbee-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:41 crc kubenswrapper[4760]: I0121 16:13:41.330525 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-664fs" Jan 21 16:13:41 crc kubenswrapper[4760]: I0121 16:13:41.377654 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-664fs"] Jan 21 16:13:41 crc kubenswrapper[4760]: I0121 16:13:41.386299 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-664fs"] Jan 21 16:13:41 crc kubenswrapper[4760]: I0121 16:13:41.633785 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff420ce7-afc0-42f7-bdcd-9c06187dfbee" path="/var/lib/kubelet/pods/ff420ce7-afc0-42f7-bdcd-9c06187dfbee/volumes" Jan 21 16:13:43 crc kubenswrapper[4760]: I0121 16:13:43.456680 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-d74l4"] Jan 21 16:13:43 crc kubenswrapper[4760]: E0121 16:13:43.457963 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff420ce7-afc0-42f7-bdcd-9c06187dfbee" containerName="extract-content" Jan 21 16:13:43 crc kubenswrapper[4760]: I0121 16:13:43.457982 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff420ce7-afc0-42f7-bdcd-9c06187dfbee" containerName="extract-content" Jan 21 16:13:43 crc kubenswrapper[4760]: E0121 16:13:43.457999 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff420ce7-afc0-42f7-bdcd-9c06187dfbee" containerName="registry-server" Jan 21 16:13:43 crc kubenswrapper[4760]: I0121 16:13:43.458006 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff420ce7-afc0-42f7-bdcd-9c06187dfbee" containerName="registry-server" Jan 21 16:13:43 crc kubenswrapper[4760]: E0121 16:13:43.458017 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff420ce7-afc0-42f7-bdcd-9c06187dfbee" containerName="extract-utilities" Jan 21 16:13:43 crc kubenswrapper[4760]: I0121 16:13:43.458025 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff420ce7-afc0-42f7-bdcd-9c06187dfbee" 
containerName="extract-utilities" Jan 21 16:13:43 crc kubenswrapper[4760]: I0121 16:13:43.458290 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff420ce7-afc0-42f7-bdcd-9c06187dfbee" containerName="registry-server" Jan 21 16:13:43 crc kubenswrapper[4760]: I0121 16:13:43.460670 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d74l4" Jan 21 16:13:43 crc kubenswrapper[4760]: I0121 16:13:43.492688 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d74l4"] Jan 21 16:13:43 crc kubenswrapper[4760]: I0121 16:13:43.517748 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e0dbc9f-a555-4994-b9ff-3bb1332d84f2-utilities\") pod \"community-operators-d74l4\" (UID: \"4e0dbc9f-a555-4994-b9ff-3bb1332d84f2\") " pod="openshift-marketplace/community-operators-d74l4" Jan 21 16:13:43 crc kubenswrapper[4760]: I0121 16:13:43.517805 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e0dbc9f-a555-4994-b9ff-3bb1332d84f2-catalog-content\") pod \"community-operators-d74l4\" (UID: \"4e0dbc9f-a555-4994-b9ff-3bb1332d84f2\") " pod="openshift-marketplace/community-operators-d74l4" Jan 21 16:13:43 crc kubenswrapper[4760]: I0121 16:13:43.517908 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zggvr\" (UniqueName: \"kubernetes.io/projected/4e0dbc9f-a555-4994-b9ff-3bb1332d84f2-kube-api-access-zggvr\") pod \"community-operators-d74l4\" (UID: \"4e0dbc9f-a555-4994-b9ff-3bb1332d84f2\") " pod="openshift-marketplace/community-operators-d74l4" Jan 21 16:13:43 crc kubenswrapper[4760]: I0121 16:13:43.619375 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e0dbc9f-a555-4994-b9ff-3bb1332d84f2-utilities\") pod \"community-operators-d74l4\" (UID: \"4e0dbc9f-a555-4994-b9ff-3bb1332d84f2\") " pod="openshift-marketplace/community-operators-d74l4" Jan 21 16:13:43 crc kubenswrapper[4760]: I0121 16:13:43.619461 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e0dbc9f-a555-4994-b9ff-3bb1332d84f2-catalog-content\") pod \"community-operators-d74l4\" (UID: \"4e0dbc9f-a555-4994-b9ff-3bb1332d84f2\") " pod="openshift-marketplace/community-operators-d74l4" Jan 21 16:13:43 crc kubenswrapper[4760]: I0121 16:13:43.619590 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zggvr\" (UniqueName: \"kubernetes.io/projected/4e0dbc9f-a555-4994-b9ff-3bb1332d84f2-kube-api-access-zggvr\") pod \"community-operators-d74l4\" (UID: \"4e0dbc9f-a555-4994-b9ff-3bb1332d84f2\") " pod="openshift-marketplace/community-operators-d74l4" Jan 21 16:13:43 crc kubenswrapper[4760]: I0121 16:13:43.620360 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e0dbc9f-a555-4994-b9ff-3bb1332d84f2-catalog-content\") pod \"community-operators-d74l4\" (UID: \"4e0dbc9f-a555-4994-b9ff-3bb1332d84f2\") " pod="openshift-marketplace/community-operators-d74l4" Jan 21 16:13:43 crc kubenswrapper[4760]: I0121 16:13:43.620344 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e0dbc9f-a555-4994-b9ff-3bb1332d84f2-utilities\") pod \"community-operators-d74l4\" (UID: \"4e0dbc9f-a555-4994-b9ff-3bb1332d84f2\") " pod="openshift-marketplace/community-operators-d74l4" Jan 21 16:13:43 crc kubenswrapper[4760]: I0121 16:13:43.644194 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zggvr\" (UniqueName: 
\"kubernetes.io/projected/4e0dbc9f-a555-4994-b9ff-3bb1332d84f2-kube-api-access-zggvr\") pod \"community-operators-d74l4\" (UID: \"4e0dbc9f-a555-4994-b9ff-3bb1332d84f2\") " pod="openshift-marketplace/community-operators-d74l4" Jan 21 16:13:43 crc kubenswrapper[4760]: I0121 16:13:43.786165 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d74l4" Jan 21 16:13:44 crc kubenswrapper[4760]: I0121 16:13:44.396137 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d74l4"] Jan 21 16:13:45 crc kubenswrapper[4760]: I0121 16:13:45.370396 4760 generic.go:334] "Generic (PLEG): container finished" podID="4e0dbc9f-a555-4994-b9ff-3bb1332d84f2" containerID="91b8e48e105410970f9b11b1797a78d8ef084d8de4edb706977ddc62b7400eef" exitCode=0 Jan 21 16:13:45 crc kubenswrapper[4760]: I0121 16:13:45.370453 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d74l4" event={"ID":"4e0dbc9f-a555-4994-b9ff-3bb1332d84f2","Type":"ContainerDied","Data":"91b8e48e105410970f9b11b1797a78d8ef084d8de4edb706977ddc62b7400eef"} Jan 21 16:13:45 crc kubenswrapper[4760]: I0121 16:13:45.370488 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d74l4" event={"ID":"4e0dbc9f-a555-4994-b9ff-3bb1332d84f2","Type":"ContainerStarted","Data":"e3037c9f30dc7a519c78dcb348d7864586f91e911556c283bb5a05a952293ed2"} Jan 21 16:13:47 crc kubenswrapper[4760]: I0121 16:13:47.393900 4760 generic.go:334] "Generic (PLEG): container finished" podID="4e0dbc9f-a555-4994-b9ff-3bb1332d84f2" containerID="570879ce8c89d215389820301894ea2b6113c66c15d8940ce20aa19f44626dc3" exitCode=0 Jan 21 16:13:47 crc kubenswrapper[4760]: I0121 16:13:47.394000 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d74l4" 
event={"ID":"4e0dbc9f-a555-4994-b9ff-3bb1332d84f2","Type":"ContainerDied","Data":"570879ce8c89d215389820301894ea2b6113c66c15d8940ce20aa19f44626dc3"} Jan 21 16:13:48 crc kubenswrapper[4760]: I0121 16:13:48.406284 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d74l4" event={"ID":"4e0dbc9f-a555-4994-b9ff-3bb1332d84f2","Type":"ContainerStarted","Data":"af3f415a80f1228a6a6771b9fdc716b19bf1c7c54c3daeceb509b77311811cae"} Jan 21 16:13:48 crc kubenswrapper[4760]: I0121 16:13:48.480106 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-d74l4" podStartSLOduration=2.719480504 podStartE2EDuration="5.480077602s" podCreationTimestamp="2026-01-21 16:13:43 +0000 UTC" firstStartedPulling="2026-01-21 16:13:45.37249731 +0000 UTC m=+1596.040266888" lastFinishedPulling="2026-01-21 16:13:48.133094408 +0000 UTC m=+1598.800863986" observedRunningTime="2026-01-21 16:13:48.471406604 +0000 UTC m=+1599.139176222" watchObservedRunningTime="2026-01-21 16:13:48.480077602 +0000 UTC m=+1599.147847180" Jan 21 16:13:53 crc kubenswrapper[4760]: I0121 16:13:53.786390 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-d74l4" Jan 21 16:13:53 crc kubenswrapper[4760]: I0121 16:13:53.786772 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-d74l4" Jan 21 16:13:53 crc kubenswrapper[4760]: I0121 16:13:53.831851 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-d74l4" Jan 21 16:13:54 crc kubenswrapper[4760]: I0121 16:13:54.505038 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-d74l4" Jan 21 16:13:54 crc kubenswrapper[4760]: I0121 16:13:54.566229 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-d74l4"] Jan 21 16:13:56 crc kubenswrapper[4760]: I0121 16:13:56.474318 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-d74l4" podUID="4e0dbc9f-a555-4994-b9ff-3bb1332d84f2" containerName="registry-server" containerID="cri-o://af3f415a80f1228a6a6771b9fdc716b19bf1c7c54c3daeceb509b77311811cae" gracePeriod=2 Jan 21 16:13:56 crc kubenswrapper[4760]: I0121 16:13:56.967271 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d74l4" Jan 21 16:13:57 crc kubenswrapper[4760]: I0121 16:13:57.069113 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e0dbc9f-a555-4994-b9ff-3bb1332d84f2-utilities\") pod \"4e0dbc9f-a555-4994-b9ff-3bb1332d84f2\" (UID: \"4e0dbc9f-a555-4994-b9ff-3bb1332d84f2\") " Jan 21 16:13:57 crc kubenswrapper[4760]: I0121 16:13:57.069265 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e0dbc9f-a555-4994-b9ff-3bb1332d84f2-catalog-content\") pod \"4e0dbc9f-a555-4994-b9ff-3bb1332d84f2\" (UID: \"4e0dbc9f-a555-4994-b9ff-3bb1332d84f2\") " Jan 21 16:13:57 crc kubenswrapper[4760]: I0121 16:13:57.069557 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zggvr\" (UniqueName: \"kubernetes.io/projected/4e0dbc9f-a555-4994-b9ff-3bb1332d84f2-kube-api-access-zggvr\") pod \"4e0dbc9f-a555-4994-b9ff-3bb1332d84f2\" (UID: \"4e0dbc9f-a555-4994-b9ff-3bb1332d84f2\") " Jan 21 16:13:57 crc kubenswrapper[4760]: I0121 16:13:57.070416 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e0dbc9f-a555-4994-b9ff-3bb1332d84f2-utilities" (OuterVolumeSpecName: "utilities") pod "4e0dbc9f-a555-4994-b9ff-3bb1332d84f2" (UID: 
"4e0dbc9f-a555-4994-b9ff-3bb1332d84f2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:13:57 crc kubenswrapper[4760]: I0121 16:13:57.075233 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e0dbc9f-a555-4994-b9ff-3bb1332d84f2-kube-api-access-zggvr" (OuterVolumeSpecName: "kube-api-access-zggvr") pod "4e0dbc9f-a555-4994-b9ff-3bb1332d84f2" (UID: "4e0dbc9f-a555-4994-b9ff-3bb1332d84f2"). InnerVolumeSpecName "kube-api-access-zggvr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:13:57 crc kubenswrapper[4760]: I0121 16:13:57.127915 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e0dbc9f-a555-4994-b9ff-3bb1332d84f2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4e0dbc9f-a555-4994-b9ff-3bb1332d84f2" (UID: "4e0dbc9f-a555-4994-b9ff-3bb1332d84f2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:13:57 crc kubenswrapper[4760]: I0121 16:13:57.171113 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e0dbc9f-a555-4994-b9ff-3bb1332d84f2-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:57 crc kubenswrapper[4760]: I0121 16:13:57.171153 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e0dbc9f-a555-4994-b9ff-3bb1332d84f2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:57 crc kubenswrapper[4760]: I0121 16:13:57.171188 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zggvr\" (UniqueName: \"kubernetes.io/projected/4e0dbc9f-a555-4994-b9ff-3bb1332d84f2-kube-api-access-zggvr\") on node \"crc\" DevicePath \"\"" Jan 21 16:13:57 crc kubenswrapper[4760]: I0121 16:13:57.486118 4760 generic.go:334] "Generic (PLEG): container finished" 
podID="4e0dbc9f-a555-4994-b9ff-3bb1332d84f2" containerID="af3f415a80f1228a6a6771b9fdc716b19bf1c7c54c3daeceb509b77311811cae" exitCode=0 Jan 21 16:13:57 crc kubenswrapper[4760]: I0121 16:13:57.486214 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d74l4" Jan 21 16:13:57 crc kubenswrapper[4760]: I0121 16:13:57.486208 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d74l4" event={"ID":"4e0dbc9f-a555-4994-b9ff-3bb1332d84f2","Type":"ContainerDied","Data":"af3f415a80f1228a6a6771b9fdc716b19bf1c7c54c3daeceb509b77311811cae"} Jan 21 16:13:57 crc kubenswrapper[4760]: I0121 16:13:57.486311 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d74l4" event={"ID":"4e0dbc9f-a555-4994-b9ff-3bb1332d84f2","Type":"ContainerDied","Data":"e3037c9f30dc7a519c78dcb348d7864586f91e911556c283bb5a05a952293ed2"} Jan 21 16:13:57 crc kubenswrapper[4760]: I0121 16:13:57.486354 4760 scope.go:117] "RemoveContainer" containerID="af3f415a80f1228a6a6771b9fdc716b19bf1c7c54c3daeceb509b77311811cae" Jan 21 16:13:57 crc kubenswrapper[4760]: I0121 16:13:57.509222 4760 scope.go:117] "RemoveContainer" containerID="570879ce8c89d215389820301894ea2b6113c66c15d8940ce20aa19f44626dc3" Jan 21 16:13:57 crc kubenswrapper[4760]: I0121 16:13:57.524661 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d74l4"] Jan 21 16:13:57 crc kubenswrapper[4760]: I0121 16:13:57.532794 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-d74l4"] Jan 21 16:13:57 crc kubenswrapper[4760]: I0121 16:13:57.551394 4760 scope.go:117] "RemoveContainer" containerID="91b8e48e105410970f9b11b1797a78d8ef084d8de4edb706977ddc62b7400eef" Jan 21 16:13:57 crc kubenswrapper[4760]: I0121 16:13:57.583305 4760 scope.go:117] "RemoveContainer" 
containerID="af3f415a80f1228a6a6771b9fdc716b19bf1c7c54c3daeceb509b77311811cae" Jan 21 16:13:57 crc kubenswrapper[4760]: E0121 16:13:57.584032 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af3f415a80f1228a6a6771b9fdc716b19bf1c7c54c3daeceb509b77311811cae\": container with ID starting with af3f415a80f1228a6a6771b9fdc716b19bf1c7c54c3daeceb509b77311811cae not found: ID does not exist" containerID="af3f415a80f1228a6a6771b9fdc716b19bf1c7c54c3daeceb509b77311811cae" Jan 21 16:13:57 crc kubenswrapper[4760]: I0121 16:13:57.584070 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af3f415a80f1228a6a6771b9fdc716b19bf1c7c54c3daeceb509b77311811cae"} err="failed to get container status \"af3f415a80f1228a6a6771b9fdc716b19bf1c7c54c3daeceb509b77311811cae\": rpc error: code = NotFound desc = could not find container \"af3f415a80f1228a6a6771b9fdc716b19bf1c7c54c3daeceb509b77311811cae\": container with ID starting with af3f415a80f1228a6a6771b9fdc716b19bf1c7c54c3daeceb509b77311811cae not found: ID does not exist" Jan 21 16:13:57 crc kubenswrapper[4760]: I0121 16:13:57.584108 4760 scope.go:117] "RemoveContainer" containerID="570879ce8c89d215389820301894ea2b6113c66c15d8940ce20aa19f44626dc3" Jan 21 16:13:57 crc kubenswrapper[4760]: E0121 16:13:57.584481 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"570879ce8c89d215389820301894ea2b6113c66c15d8940ce20aa19f44626dc3\": container with ID starting with 570879ce8c89d215389820301894ea2b6113c66c15d8940ce20aa19f44626dc3 not found: ID does not exist" containerID="570879ce8c89d215389820301894ea2b6113c66c15d8940ce20aa19f44626dc3" Jan 21 16:13:57 crc kubenswrapper[4760]: I0121 16:13:57.584509 4760 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"570879ce8c89d215389820301894ea2b6113c66c15d8940ce20aa19f44626dc3"} err="failed to get container status \"570879ce8c89d215389820301894ea2b6113c66c15d8940ce20aa19f44626dc3\": rpc error: code = NotFound desc = could not find container \"570879ce8c89d215389820301894ea2b6113c66c15d8940ce20aa19f44626dc3\": container with ID starting with 570879ce8c89d215389820301894ea2b6113c66c15d8940ce20aa19f44626dc3 not found: ID does not exist" Jan 21 16:13:57 crc kubenswrapper[4760]: I0121 16:13:57.584526 4760 scope.go:117] "RemoveContainer" containerID="91b8e48e105410970f9b11b1797a78d8ef084d8de4edb706977ddc62b7400eef" Jan 21 16:13:57 crc kubenswrapper[4760]: E0121 16:13:57.584752 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91b8e48e105410970f9b11b1797a78d8ef084d8de4edb706977ddc62b7400eef\": container with ID starting with 91b8e48e105410970f9b11b1797a78d8ef084d8de4edb706977ddc62b7400eef not found: ID does not exist" containerID="91b8e48e105410970f9b11b1797a78d8ef084d8de4edb706977ddc62b7400eef" Jan 21 16:13:57 crc kubenswrapper[4760]: I0121 16:13:57.584775 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91b8e48e105410970f9b11b1797a78d8ef084d8de4edb706977ddc62b7400eef"} err="failed to get container status \"91b8e48e105410970f9b11b1797a78d8ef084d8de4edb706977ddc62b7400eef\": rpc error: code = NotFound desc = could not find container \"91b8e48e105410970f9b11b1797a78d8ef084d8de4edb706977ddc62b7400eef\": container with ID starting with 91b8e48e105410970f9b11b1797a78d8ef084d8de4edb706977ddc62b7400eef not found: ID does not exist" Jan 21 16:13:57 crc kubenswrapper[4760]: I0121 16:13:57.636199 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e0dbc9f-a555-4994-b9ff-3bb1332d84f2" path="/var/lib/kubelet/pods/4e0dbc9f-a555-4994-b9ff-3bb1332d84f2/volumes" Jan 21 16:14:20 crc kubenswrapper[4760]: I0121 
16:14:20.946767 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:14:20 crc kubenswrapper[4760]: I0121 16:14:20.947379 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:14:36 crc kubenswrapper[4760]: I0121 16:14:36.788779 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5sm8l"] Jan 21 16:14:36 crc kubenswrapper[4760]: E0121 16:14:36.789842 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e0dbc9f-a555-4994-b9ff-3bb1332d84f2" containerName="extract-content" Jan 21 16:14:36 crc kubenswrapper[4760]: I0121 16:14:36.789860 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e0dbc9f-a555-4994-b9ff-3bb1332d84f2" containerName="extract-content" Jan 21 16:14:36 crc kubenswrapper[4760]: E0121 16:14:36.789893 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e0dbc9f-a555-4994-b9ff-3bb1332d84f2" containerName="extract-utilities" Jan 21 16:14:36 crc kubenswrapper[4760]: I0121 16:14:36.789905 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e0dbc9f-a555-4994-b9ff-3bb1332d84f2" containerName="extract-utilities" Jan 21 16:14:36 crc kubenswrapper[4760]: E0121 16:14:36.789935 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e0dbc9f-a555-4994-b9ff-3bb1332d84f2" containerName="registry-server" Jan 21 16:14:36 crc kubenswrapper[4760]: I0121 16:14:36.789943 4760 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="4e0dbc9f-a555-4994-b9ff-3bb1332d84f2" containerName="registry-server" Jan 21 16:14:36 crc kubenswrapper[4760]: I0121 16:14:36.790190 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e0dbc9f-a555-4994-b9ff-3bb1332d84f2" containerName="registry-server" Jan 21 16:14:36 crc kubenswrapper[4760]: I0121 16:14:36.791940 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5sm8l" Jan 21 16:14:36 crc kubenswrapper[4760]: I0121 16:14:36.829769 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5sm8l"] Jan 21 16:14:36 crc kubenswrapper[4760]: I0121 16:14:36.859865 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d54d177-6b16-47aa-929d-7eb4e8d986ba-utilities\") pod \"certified-operators-5sm8l\" (UID: \"5d54d177-6b16-47aa-929d-7eb4e8d986ba\") " pod="openshift-marketplace/certified-operators-5sm8l" Jan 21 16:14:36 crc kubenswrapper[4760]: I0121 16:14:36.860212 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d54d177-6b16-47aa-929d-7eb4e8d986ba-catalog-content\") pod \"certified-operators-5sm8l\" (UID: \"5d54d177-6b16-47aa-929d-7eb4e8d986ba\") " pod="openshift-marketplace/certified-operators-5sm8l" Jan 21 16:14:36 crc kubenswrapper[4760]: I0121 16:14:36.860634 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gh8m\" (UniqueName: \"kubernetes.io/projected/5d54d177-6b16-47aa-929d-7eb4e8d986ba-kube-api-access-6gh8m\") pod \"certified-operators-5sm8l\" (UID: \"5d54d177-6b16-47aa-929d-7eb4e8d986ba\") " pod="openshift-marketplace/certified-operators-5sm8l" Jan 21 16:14:36 crc kubenswrapper[4760]: I0121 16:14:36.961410 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d54d177-6b16-47aa-929d-7eb4e8d986ba-utilities\") pod \"certified-operators-5sm8l\" (UID: \"5d54d177-6b16-47aa-929d-7eb4e8d986ba\") " pod="openshift-marketplace/certified-operators-5sm8l" Jan 21 16:14:36 crc kubenswrapper[4760]: I0121 16:14:36.961822 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d54d177-6b16-47aa-929d-7eb4e8d986ba-catalog-content\") pod \"certified-operators-5sm8l\" (UID: \"5d54d177-6b16-47aa-929d-7eb4e8d986ba\") " pod="openshift-marketplace/certified-operators-5sm8l" Jan 21 16:14:36 crc kubenswrapper[4760]: I0121 16:14:36.961911 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gh8m\" (UniqueName: \"kubernetes.io/projected/5d54d177-6b16-47aa-929d-7eb4e8d986ba-kube-api-access-6gh8m\") pod \"certified-operators-5sm8l\" (UID: \"5d54d177-6b16-47aa-929d-7eb4e8d986ba\") " pod="openshift-marketplace/certified-operators-5sm8l" Jan 21 16:14:36 crc kubenswrapper[4760]: I0121 16:14:36.962053 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d54d177-6b16-47aa-929d-7eb4e8d986ba-utilities\") pod \"certified-operators-5sm8l\" (UID: \"5d54d177-6b16-47aa-929d-7eb4e8d986ba\") " pod="openshift-marketplace/certified-operators-5sm8l" Jan 21 16:14:36 crc kubenswrapper[4760]: I0121 16:14:36.962351 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d54d177-6b16-47aa-929d-7eb4e8d986ba-catalog-content\") pod \"certified-operators-5sm8l\" (UID: \"5d54d177-6b16-47aa-929d-7eb4e8d986ba\") " pod="openshift-marketplace/certified-operators-5sm8l" Jan 21 16:14:36 crc kubenswrapper[4760]: I0121 16:14:36.986493 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-6gh8m\" (UniqueName: \"kubernetes.io/projected/5d54d177-6b16-47aa-929d-7eb4e8d986ba-kube-api-access-6gh8m\") pod \"certified-operators-5sm8l\" (UID: \"5d54d177-6b16-47aa-929d-7eb4e8d986ba\") " pod="openshift-marketplace/certified-operators-5sm8l" Jan 21 16:14:37 crc kubenswrapper[4760]: I0121 16:14:37.130694 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5sm8l" Jan 21 16:14:37 crc kubenswrapper[4760]: I0121 16:14:37.511988 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5sm8l"] Jan 21 16:14:37 crc kubenswrapper[4760]: I0121 16:14:37.889562 4760 generic.go:334] "Generic (PLEG): container finished" podID="5d54d177-6b16-47aa-929d-7eb4e8d986ba" containerID="0194dd68535f9956917c79dbeb858d8724b0d065e01f1d813aed053dc89abfe8" exitCode=0 Jan 21 16:14:37 crc kubenswrapper[4760]: I0121 16:14:37.889673 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5sm8l" event={"ID":"5d54d177-6b16-47aa-929d-7eb4e8d986ba","Type":"ContainerDied","Data":"0194dd68535f9956917c79dbeb858d8724b0d065e01f1d813aed053dc89abfe8"} Jan 21 16:14:37 crc kubenswrapper[4760]: I0121 16:14:37.889912 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5sm8l" event={"ID":"5d54d177-6b16-47aa-929d-7eb4e8d986ba","Type":"ContainerStarted","Data":"8a3703464c0489d680938da9d2148fbab1ba5833e7758cde04bd270676d357ba"} Jan 21 16:14:39 crc kubenswrapper[4760]: I0121 16:14:39.960013 4760 generic.go:334] "Generic (PLEG): container finished" podID="5d54d177-6b16-47aa-929d-7eb4e8d986ba" containerID="aa7ef69b5db8d297d8f85807459bc3203b8573e16d621c9a2ab299b19c2999fd" exitCode=0 Jan 21 16:14:39 crc kubenswrapper[4760]: I0121 16:14:39.960412 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5sm8l" 
event={"ID":"5d54d177-6b16-47aa-929d-7eb4e8d986ba","Type":"ContainerDied","Data":"aa7ef69b5db8d297d8f85807459bc3203b8573e16d621c9a2ab299b19c2999fd"} Jan 21 16:14:40 crc kubenswrapper[4760]: I0121 16:14:40.158507 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bsb4r"] Jan 21 16:14:40 crc kubenswrapper[4760]: I0121 16:14:40.165208 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bsb4r" Jan 21 16:14:40 crc kubenswrapper[4760]: I0121 16:14:40.184132 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bsb4r"] Jan 21 16:14:40 crc kubenswrapper[4760]: I0121 16:14:40.249966 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03eab569-ca7e-4701-853b-5468283a3a57-catalog-content\") pod \"redhat-marketplace-bsb4r\" (UID: \"03eab569-ca7e-4701-853b-5468283a3a57\") " pod="openshift-marketplace/redhat-marketplace-bsb4r" Jan 21 16:14:40 crc kubenswrapper[4760]: I0121 16:14:40.250120 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03eab569-ca7e-4701-853b-5468283a3a57-utilities\") pod \"redhat-marketplace-bsb4r\" (UID: \"03eab569-ca7e-4701-853b-5468283a3a57\") " pod="openshift-marketplace/redhat-marketplace-bsb4r" Jan 21 16:14:40 crc kubenswrapper[4760]: I0121 16:14:40.250200 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxrrm\" (UniqueName: \"kubernetes.io/projected/03eab569-ca7e-4701-853b-5468283a3a57-kube-api-access-xxrrm\") pod \"redhat-marketplace-bsb4r\" (UID: \"03eab569-ca7e-4701-853b-5468283a3a57\") " pod="openshift-marketplace/redhat-marketplace-bsb4r" Jan 21 16:14:40 crc kubenswrapper[4760]: I0121 16:14:40.351490 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03eab569-ca7e-4701-853b-5468283a3a57-utilities\") pod \"redhat-marketplace-bsb4r\" (UID: \"03eab569-ca7e-4701-853b-5468283a3a57\") " pod="openshift-marketplace/redhat-marketplace-bsb4r" Jan 21 16:14:40 crc kubenswrapper[4760]: I0121 16:14:40.351559 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxrrm\" (UniqueName: \"kubernetes.io/projected/03eab569-ca7e-4701-853b-5468283a3a57-kube-api-access-xxrrm\") pod \"redhat-marketplace-bsb4r\" (UID: \"03eab569-ca7e-4701-853b-5468283a3a57\") " pod="openshift-marketplace/redhat-marketplace-bsb4r" Jan 21 16:14:40 crc kubenswrapper[4760]: I0121 16:14:40.351669 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03eab569-ca7e-4701-853b-5468283a3a57-catalog-content\") pod \"redhat-marketplace-bsb4r\" (UID: \"03eab569-ca7e-4701-853b-5468283a3a57\") " pod="openshift-marketplace/redhat-marketplace-bsb4r" Jan 21 16:14:40 crc kubenswrapper[4760]: I0121 16:14:40.352144 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03eab569-ca7e-4701-853b-5468283a3a57-utilities\") pod \"redhat-marketplace-bsb4r\" (UID: \"03eab569-ca7e-4701-853b-5468283a3a57\") " pod="openshift-marketplace/redhat-marketplace-bsb4r" Jan 21 16:14:40 crc kubenswrapper[4760]: I0121 16:14:40.352218 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03eab569-ca7e-4701-853b-5468283a3a57-catalog-content\") pod \"redhat-marketplace-bsb4r\" (UID: \"03eab569-ca7e-4701-853b-5468283a3a57\") " pod="openshift-marketplace/redhat-marketplace-bsb4r" Jan 21 16:14:40 crc kubenswrapper[4760]: I0121 16:14:40.381920 4760 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-xxrrm\" (UniqueName: \"kubernetes.io/projected/03eab569-ca7e-4701-853b-5468283a3a57-kube-api-access-xxrrm\") pod \"redhat-marketplace-bsb4r\" (UID: \"03eab569-ca7e-4701-853b-5468283a3a57\") " pod="openshift-marketplace/redhat-marketplace-bsb4r" Jan 21 16:14:40 crc kubenswrapper[4760]: I0121 16:14:40.491559 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bsb4r" Jan 21 16:14:40 crc kubenswrapper[4760]: I0121 16:14:40.985217 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5sm8l" event={"ID":"5d54d177-6b16-47aa-929d-7eb4e8d986ba","Type":"ContainerStarted","Data":"a4e49ba6c40beb8e3b9d53b462d2b95be82ac637c67f52917f3e4bc4825b3913"} Jan 21 16:14:41 crc kubenswrapper[4760]: I0121 16:14:41.014088 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5sm8l" podStartSLOduration=2.455659106 podStartE2EDuration="5.014064361s" podCreationTimestamp="2026-01-21 16:14:36 +0000 UTC" firstStartedPulling="2026-01-21 16:14:37.892811749 +0000 UTC m=+1648.560581327" lastFinishedPulling="2026-01-21 16:14:40.451217014 +0000 UTC m=+1651.118986582" observedRunningTime="2026-01-21 16:14:41.011860077 +0000 UTC m=+1651.679629665" watchObservedRunningTime="2026-01-21 16:14:41.014064361 +0000 UTC m=+1651.681833939" Jan 21 16:14:41 crc kubenswrapper[4760]: I0121 16:14:41.099533 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bsb4r"] Jan 21 16:14:41 crc kubenswrapper[4760]: I0121 16:14:41.997947 4760 generic.go:334] "Generic (PLEG): container finished" podID="03eab569-ca7e-4701-853b-5468283a3a57" containerID="125606aa89d7a0dbe5396a8d273463aa7c2d860f5e8c13909f180283e1974180" exitCode=0 Jan 21 16:14:41 crc kubenswrapper[4760]: I0121 16:14:41.998056 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-bsb4r" event={"ID":"03eab569-ca7e-4701-853b-5468283a3a57","Type":"ContainerDied","Data":"125606aa89d7a0dbe5396a8d273463aa7c2d860f5e8c13909f180283e1974180"} Jan 21 16:14:41 crc kubenswrapper[4760]: I0121 16:14:41.998452 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bsb4r" event={"ID":"03eab569-ca7e-4701-853b-5468283a3a57","Type":"ContainerStarted","Data":"86384857510fb387c83034782606c93c059f6b8744109e080e8c0ea8dc79149c"} Jan 21 16:14:44 crc kubenswrapper[4760]: I0121 16:14:44.020856 4760 generic.go:334] "Generic (PLEG): container finished" podID="03eab569-ca7e-4701-853b-5468283a3a57" containerID="af0de97da87dbcfa6deba50401f841faaa4f2881fc793c39f60e9ae56cb5c4aa" exitCode=0 Jan 21 16:14:44 crc kubenswrapper[4760]: I0121 16:14:44.020953 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bsb4r" event={"ID":"03eab569-ca7e-4701-853b-5468283a3a57","Type":"ContainerDied","Data":"af0de97da87dbcfa6deba50401f841faaa4f2881fc793c39f60e9ae56cb5c4aa"} Jan 21 16:14:46 crc kubenswrapper[4760]: I0121 16:14:46.041457 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bsb4r" event={"ID":"03eab569-ca7e-4701-853b-5468283a3a57","Type":"ContainerStarted","Data":"f39231e8e448e43958e5e9a017c90ba025dc1861f79a899fa3a31c56afe1352d"} Jan 21 16:14:46 crc kubenswrapper[4760]: I0121 16:14:46.065019 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bsb4r" podStartSLOduration=3.06202727 podStartE2EDuration="6.06499438s" podCreationTimestamp="2026-01-21 16:14:40 +0000 UTC" firstStartedPulling="2026-01-21 16:14:41.999679936 +0000 UTC m=+1652.667449514" lastFinishedPulling="2026-01-21 16:14:45.002647046 +0000 UTC m=+1655.670416624" observedRunningTime="2026-01-21 16:14:46.063842802 +0000 UTC m=+1656.731612400" 
watchObservedRunningTime="2026-01-21 16:14:46.06499438 +0000 UTC m=+1656.732763958" Jan 21 16:14:47 crc kubenswrapper[4760]: I0121 16:14:47.131146 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5sm8l" Jan 21 16:14:47 crc kubenswrapper[4760]: I0121 16:14:47.131214 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5sm8l" Jan 21 16:14:47 crc kubenswrapper[4760]: I0121 16:14:47.179532 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5sm8l" Jan 21 16:14:48 crc kubenswrapper[4760]: I0121 16:14:48.108681 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5sm8l" Jan 21 16:14:48 crc kubenswrapper[4760]: I0121 16:14:48.341831 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5sm8l"] Jan 21 16:14:50 crc kubenswrapper[4760]: I0121 16:14:50.097944 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5sm8l" podUID="5d54d177-6b16-47aa-929d-7eb4e8d986ba" containerName="registry-server" containerID="cri-o://a4e49ba6c40beb8e3b9d53b462d2b95be82ac637c67f52917f3e4bc4825b3913" gracePeriod=2 Jan 21 16:14:50 crc kubenswrapper[4760]: I0121 16:14:50.492527 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bsb4r" Jan 21 16:14:50 crc kubenswrapper[4760]: I0121 16:14:50.492588 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bsb4r" Jan 21 16:14:50 crc kubenswrapper[4760]: I0121 16:14:50.580129 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bsb4r" Jan 21 16:14:50 crc kubenswrapper[4760]: 
I0121 16:14:50.946452 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:14:50 crc kubenswrapper[4760]: I0121 16:14:50.946535 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:14:51 crc kubenswrapper[4760]: I0121 16:14:51.168250 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bsb4r" Jan 21 16:14:51 crc kubenswrapper[4760]: I0121 16:14:51.741882 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bsb4r"] Jan 21 16:14:52 crc kubenswrapper[4760]: I0121 16:14:52.120127 4760 generic.go:334] "Generic (PLEG): container finished" podID="5d54d177-6b16-47aa-929d-7eb4e8d986ba" containerID="a4e49ba6c40beb8e3b9d53b462d2b95be82ac637c67f52917f3e4bc4825b3913" exitCode=0 Jan 21 16:14:52 crc kubenswrapper[4760]: I0121 16:14:52.120202 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5sm8l" event={"ID":"5d54d177-6b16-47aa-929d-7eb4e8d986ba","Type":"ContainerDied","Data":"a4e49ba6c40beb8e3b9d53b462d2b95be82ac637c67f52917f3e4bc4825b3913"} Jan 21 16:14:52 crc kubenswrapper[4760]: I0121 16:14:52.615285 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5sm8l" Jan 21 16:14:52 crc kubenswrapper[4760]: I0121 16:14:52.693959 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d54d177-6b16-47aa-929d-7eb4e8d986ba-catalog-content\") pod \"5d54d177-6b16-47aa-929d-7eb4e8d986ba\" (UID: \"5d54d177-6b16-47aa-929d-7eb4e8d986ba\") " Jan 21 16:14:52 crc kubenswrapper[4760]: I0121 16:14:52.694159 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d54d177-6b16-47aa-929d-7eb4e8d986ba-utilities\") pod \"5d54d177-6b16-47aa-929d-7eb4e8d986ba\" (UID: \"5d54d177-6b16-47aa-929d-7eb4e8d986ba\") " Jan 21 16:14:52 crc kubenswrapper[4760]: I0121 16:14:52.694303 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6gh8m\" (UniqueName: \"kubernetes.io/projected/5d54d177-6b16-47aa-929d-7eb4e8d986ba-kube-api-access-6gh8m\") pod \"5d54d177-6b16-47aa-929d-7eb4e8d986ba\" (UID: \"5d54d177-6b16-47aa-929d-7eb4e8d986ba\") " Jan 21 16:14:52 crc kubenswrapper[4760]: I0121 16:14:52.695593 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d54d177-6b16-47aa-929d-7eb4e8d986ba-utilities" (OuterVolumeSpecName: "utilities") pod "5d54d177-6b16-47aa-929d-7eb4e8d986ba" (UID: "5d54d177-6b16-47aa-929d-7eb4e8d986ba"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:14:52 crc kubenswrapper[4760]: I0121 16:14:52.702636 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d54d177-6b16-47aa-929d-7eb4e8d986ba-kube-api-access-6gh8m" (OuterVolumeSpecName: "kube-api-access-6gh8m") pod "5d54d177-6b16-47aa-929d-7eb4e8d986ba" (UID: "5d54d177-6b16-47aa-929d-7eb4e8d986ba"). InnerVolumeSpecName "kube-api-access-6gh8m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:14:52 crc kubenswrapper[4760]: I0121 16:14:52.749177 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d54d177-6b16-47aa-929d-7eb4e8d986ba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5d54d177-6b16-47aa-929d-7eb4e8d986ba" (UID: "5d54d177-6b16-47aa-929d-7eb4e8d986ba"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:14:52 crc kubenswrapper[4760]: I0121 16:14:52.797256 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6gh8m\" (UniqueName: \"kubernetes.io/projected/5d54d177-6b16-47aa-929d-7eb4e8d986ba-kube-api-access-6gh8m\") on node \"crc\" DevicePath \"\"" Jan 21 16:14:52 crc kubenswrapper[4760]: I0121 16:14:52.797316 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d54d177-6b16-47aa-929d-7eb4e8d986ba-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:14:52 crc kubenswrapper[4760]: I0121 16:14:52.797354 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d54d177-6b16-47aa-929d-7eb4e8d986ba-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:14:53 crc kubenswrapper[4760]: I0121 16:14:53.133546 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5sm8l" Jan 21 16:14:53 crc kubenswrapper[4760]: I0121 16:14:53.133535 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5sm8l" event={"ID":"5d54d177-6b16-47aa-929d-7eb4e8d986ba","Type":"ContainerDied","Data":"8a3703464c0489d680938da9d2148fbab1ba5833e7758cde04bd270676d357ba"} Jan 21 16:14:53 crc kubenswrapper[4760]: I0121 16:14:53.133620 4760 scope.go:117] "RemoveContainer" containerID="a4e49ba6c40beb8e3b9d53b462d2b95be82ac637c67f52917f3e4bc4825b3913" Jan 21 16:14:53 crc kubenswrapper[4760]: I0121 16:14:53.133667 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bsb4r" podUID="03eab569-ca7e-4701-853b-5468283a3a57" containerName="registry-server" containerID="cri-o://f39231e8e448e43958e5e9a017c90ba025dc1861f79a899fa3a31c56afe1352d" gracePeriod=2 Jan 21 16:14:53 crc kubenswrapper[4760]: I0121 16:14:53.161643 4760 scope.go:117] "RemoveContainer" containerID="aa7ef69b5db8d297d8f85807459bc3203b8573e16d621c9a2ab299b19c2999fd" Jan 21 16:14:53 crc kubenswrapper[4760]: I0121 16:14:53.178145 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5sm8l"] Jan 21 16:14:53 crc kubenswrapper[4760]: I0121 16:14:53.186295 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5sm8l"] Jan 21 16:14:53 crc kubenswrapper[4760]: I0121 16:14:53.205709 4760 scope.go:117] "RemoveContainer" containerID="0194dd68535f9956917c79dbeb858d8724b0d065e01f1d813aed053dc89abfe8" Jan 21 16:14:53 crc kubenswrapper[4760]: I0121 16:14:53.636908 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d54d177-6b16-47aa-929d-7eb4e8d986ba" path="/var/lib/kubelet/pods/5d54d177-6b16-47aa-929d-7eb4e8d986ba/volumes" Jan 21 16:14:54 crc kubenswrapper[4760]: I0121 16:14:54.146925 4760 generic.go:334] "Generic 
(PLEG): container finished" podID="03eab569-ca7e-4701-853b-5468283a3a57" containerID="f39231e8e448e43958e5e9a017c90ba025dc1861f79a899fa3a31c56afe1352d" exitCode=0 Jan 21 16:14:54 crc kubenswrapper[4760]: I0121 16:14:54.146976 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bsb4r" event={"ID":"03eab569-ca7e-4701-853b-5468283a3a57","Type":"ContainerDied","Data":"f39231e8e448e43958e5e9a017c90ba025dc1861f79a899fa3a31c56afe1352d"} Jan 21 16:14:54 crc kubenswrapper[4760]: I0121 16:14:54.501990 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bsb4r" Jan 21 16:14:54 crc kubenswrapper[4760]: I0121 16:14:54.526889 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxrrm\" (UniqueName: \"kubernetes.io/projected/03eab569-ca7e-4701-853b-5468283a3a57-kube-api-access-xxrrm\") pod \"03eab569-ca7e-4701-853b-5468283a3a57\" (UID: \"03eab569-ca7e-4701-853b-5468283a3a57\") " Jan 21 16:14:54 crc kubenswrapper[4760]: I0121 16:14:54.526951 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03eab569-ca7e-4701-853b-5468283a3a57-catalog-content\") pod \"03eab569-ca7e-4701-853b-5468283a3a57\" (UID: \"03eab569-ca7e-4701-853b-5468283a3a57\") " Jan 21 16:14:54 crc kubenswrapper[4760]: I0121 16:14:54.527169 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03eab569-ca7e-4701-853b-5468283a3a57-utilities\") pod \"03eab569-ca7e-4701-853b-5468283a3a57\" (UID: \"03eab569-ca7e-4701-853b-5468283a3a57\") " Jan 21 16:14:54 crc kubenswrapper[4760]: I0121 16:14:54.529091 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03eab569-ca7e-4701-853b-5468283a3a57-utilities" (OuterVolumeSpecName: "utilities") 
pod "03eab569-ca7e-4701-853b-5468283a3a57" (UID: "03eab569-ca7e-4701-853b-5468283a3a57"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:14:54 crc kubenswrapper[4760]: I0121 16:14:54.555610 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03eab569-ca7e-4701-853b-5468283a3a57-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "03eab569-ca7e-4701-853b-5468283a3a57" (UID: "03eab569-ca7e-4701-853b-5468283a3a57"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:14:54 crc kubenswrapper[4760]: I0121 16:14:54.557887 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03eab569-ca7e-4701-853b-5468283a3a57-kube-api-access-xxrrm" (OuterVolumeSpecName: "kube-api-access-xxrrm") pod "03eab569-ca7e-4701-853b-5468283a3a57" (UID: "03eab569-ca7e-4701-853b-5468283a3a57"). InnerVolumeSpecName "kube-api-access-xxrrm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:14:54 crc kubenswrapper[4760]: I0121 16:14:54.630558 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03eab569-ca7e-4701-853b-5468283a3a57-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:14:54 crc kubenswrapper[4760]: I0121 16:14:54.630610 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxrrm\" (UniqueName: \"kubernetes.io/projected/03eab569-ca7e-4701-853b-5468283a3a57-kube-api-access-xxrrm\") on node \"crc\" DevicePath \"\"" Jan 21 16:14:54 crc kubenswrapper[4760]: I0121 16:14:54.630623 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03eab569-ca7e-4701-853b-5468283a3a57-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:14:55 crc kubenswrapper[4760]: I0121 16:14:55.158002 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bsb4r" event={"ID":"03eab569-ca7e-4701-853b-5468283a3a57","Type":"ContainerDied","Data":"86384857510fb387c83034782606c93c059f6b8744109e080e8c0ea8dc79149c"} Jan 21 16:14:55 crc kubenswrapper[4760]: I0121 16:14:55.159064 4760 scope.go:117] "RemoveContainer" containerID="f39231e8e448e43958e5e9a017c90ba025dc1861f79a899fa3a31c56afe1352d" Jan 21 16:14:55 crc kubenswrapper[4760]: I0121 16:14:55.158095 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bsb4r" Jan 21 16:14:55 crc kubenswrapper[4760]: I0121 16:14:55.240444 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bsb4r"] Jan 21 16:14:55 crc kubenswrapper[4760]: I0121 16:14:55.246837 4760 scope.go:117] "RemoveContainer" containerID="af0de97da87dbcfa6deba50401f841faaa4f2881fc793c39f60e9ae56cb5c4aa" Jan 21 16:14:55 crc kubenswrapper[4760]: I0121 16:14:55.252716 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bsb4r"] Jan 21 16:14:55 crc kubenswrapper[4760]: I0121 16:14:55.269721 4760 scope.go:117] "RemoveContainer" containerID="125606aa89d7a0dbe5396a8d273463aa7c2d860f5e8c13909f180283e1974180" Jan 21 16:14:55 crc kubenswrapper[4760]: I0121 16:14:55.634465 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03eab569-ca7e-4701-853b-5468283a3a57" path="/var/lib/kubelet/pods/03eab569-ca7e-4701-853b-5468283a3a57/volumes" Jan 21 16:15:00 crc kubenswrapper[4760]: I0121 16:15:00.162123 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483535-65sc9"] Jan 21 16:15:00 crc kubenswrapper[4760]: E0121 16:15:00.163311 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03eab569-ca7e-4701-853b-5468283a3a57" containerName="extract-utilities" Jan 21 16:15:00 crc kubenswrapper[4760]: I0121 16:15:00.163357 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="03eab569-ca7e-4701-853b-5468283a3a57" containerName="extract-utilities" Jan 21 16:15:00 crc kubenswrapper[4760]: E0121 16:15:00.163377 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03eab569-ca7e-4701-853b-5468283a3a57" containerName="registry-server" Jan 21 16:15:00 crc kubenswrapper[4760]: I0121 16:15:00.163384 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="03eab569-ca7e-4701-853b-5468283a3a57" 
containerName="registry-server" Jan 21 16:15:00 crc kubenswrapper[4760]: E0121 16:15:00.163407 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03eab569-ca7e-4701-853b-5468283a3a57" containerName="extract-content" Jan 21 16:15:00 crc kubenswrapper[4760]: I0121 16:15:00.163415 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="03eab569-ca7e-4701-853b-5468283a3a57" containerName="extract-content" Jan 21 16:15:00 crc kubenswrapper[4760]: E0121 16:15:00.163430 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d54d177-6b16-47aa-929d-7eb4e8d986ba" containerName="extract-content" Jan 21 16:15:00 crc kubenswrapper[4760]: I0121 16:15:00.163438 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d54d177-6b16-47aa-929d-7eb4e8d986ba" containerName="extract-content" Jan 21 16:15:00 crc kubenswrapper[4760]: E0121 16:15:00.163465 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d54d177-6b16-47aa-929d-7eb4e8d986ba" containerName="extract-utilities" Jan 21 16:15:00 crc kubenswrapper[4760]: I0121 16:15:00.163475 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d54d177-6b16-47aa-929d-7eb4e8d986ba" containerName="extract-utilities" Jan 21 16:15:00 crc kubenswrapper[4760]: E0121 16:15:00.163492 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d54d177-6b16-47aa-929d-7eb4e8d986ba" containerName="registry-server" Jan 21 16:15:00 crc kubenswrapper[4760]: I0121 16:15:00.163498 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d54d177-6b16-47aa-929d-7eb4e8d986ba" containerName="registry-server" Jan 21 16:15:00 crc kubenswrapper[4760]: I0121 16:15:00.163755 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d54d177-6b16-47aa-929d-7eb4e8d986ba" containerName="registry-server" Jan 21 16:15:00 crc kubenswrapper[4760]: I0121 16:15:00.163774 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="03eab569-ca7e-4701-853b-5468283a3a57" 
containerName="registry-server" Jan 21 16:15:00 crc kubenswrapper[4760]: I0121 16:15:00.164612 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-65sc9" Jan 21 16:15:00 crc kubenswrapper[4760]: I0121 16:15:00.167923 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 16:15:00 crc kubenswrapper[4760]: I0121 16:15:00.168714 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 16:15:00 crc kubenswrapper[4760]: I0121 16:15:00.173069 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483535-65sc9"] Jan 21 16:15:00 crc kubenswrapper[4760]: I0121 16:15:00.240476 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/751cfeab-2105-46b2-93bd-d5b7b09c8ee4-config-volume\") pod \"collect-profiles-29483535-65sc9\" (UID: \"751cfeab-2105-46b2-93bd-d5b7b09c8ee4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-65sc9" Jan 21 16:15:00 crc kubenswrapper[4760]: I0121 16:15:00.240551 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbdwk\" (UniqueName: \"kubernetes.io/projected/751cfeab-2105-46b2-93bd-d5b7b09c8ee4-kube-api-access-qbdwk\") pod \"collect-profiles-29483535-65sc9\" (UID: \"751cfeab-2105-46b2-93bd-d5b7b09c8ee4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-65sc9" Jan 21 16:15:00 crc kubenswrapper[4760]: I0121 16:15:00.240584 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/751cfeab-2105-46b2-93bd-d5b7b09c8ee4-secret-volume\") pod \"collect-profiles-29483535-65sc9\" (UID: \"751cfeab-2105-46b2-93bd-d5b7b09c8ee4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-65sc9" Jan 21 16:15:00 crc kubenswrapper[4760]: I0121 16:15:00.342753 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/751cfeab-2105-46b2-93bd-d5b7b09c8ee4-config-volume\") pod \"collect-profiles-29483535-65sc9\" (UID: \"751cfeab-2105-46b2-93bd-d5b7b09c8ee4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-65sc9" Jan 21 16:15:00 crc kubenswrapper[4760]: I0121 16:15:00.342863 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbdwk\" (UniqueName: \"kubernetes.io/projected/751cfeab-2105-46b2-93bd-d5b7b09c8ee4-kube-api-access-qbdwk\") pod \"collect-profiles-29483535-65sc9\" (UID: \"751cfeab-2105-46b2-93bd-d5b7b09c8ee4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-65sc9" Jan 21 16:15:00 crc kubenswrapper[4760]: I0121 16:15:00.342927 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/751cfeab-2105-46b2-93bd-d5b7b09c8ee4-secret-volume\") pod \"collect-profiles-29483535-65sc9\" (UID: \"751cfeab-2105-46b2-93bd-d5b7b09c8ee4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-65sc9" Jan 21 16:15:00 crc kubenswrapper[4760]: I0121 16:15:00.344485 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/751cfeab-2105-46b2-93bd-d5b7b09c8ee4-config-volume\") pod \"collect-profiles-29483535-65sc9\" (UID: \"751cfeab-2105-46b2-93bd-d5b7b09c8ee4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-65sc9" Jan 21 16:15:00 crc kubenswrapper[4760]: I0121 16:15:00.351357 4760 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/751cfeab-2105-46b2-93bd-d5b7b09c8ee4-secret-volume\") pod \"collect-profiles-29483535-65sc9\" (UID: \"751cfeab-2105-46b2-93bd-d5b7b09c8ee4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-65sc9" Jan 21 16:15:00 crc kubenswrapper[4760]: I0121 16:15:00.369181 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbdwk\" (UniqueName: \"kubernetes.io/projected/751cfeab-2105-46b2-93bd-d5b7b09c8ee4-kube-api-access-qbdwk\") pod \"collect-profiles-29483535-65sc9\" (UID: \"751cfeab-2105-46b2-93bd-d5b7b09c8ee4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-65sc9" Jan 21 16:15:00 crc kubenswrapper[4760]: I0121 16:15:00.488431 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-65sc9" Jan 21 16:15:01 crc kubenswrapper[4760]: I0121 16:15:01.091765 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483535-65sc9"] Jan 21 16:15:01 crc kubenswrapper[4760]: I0121 16:15:01.224598 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-65sc9" event={"ID":"751cfeab-2105-46b2-93bd-d5b7b09c8ee4","Type":"ContainerStarted","Data":"4ebfb277fba8166bcf33bf3adbb69809f76f9258ebcc88af8ad291ddb865ca0f"} Jan 21 16:15:02 crc kubenswrapper[4760]: I0121 16:15:02.236645 4760 generic.go:334] "Generic (PLEG): container finished" podID="751cfeab-2105-46b2-93bd-d5b7b09c8ee4" containerID="1240d6fd1cedd54b513b218e448ec1051d4c4912f66b7e655805dc838e90a14c" exitCode=0 Jan 21 16:15:02 crc kubenswrapper[4760]: I0121 16:15:02.236737 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-65sc9" 
event={"ID":"751cfeab-2105-46b2-93bd-d5b7b09c8ee4","Type":"ContainerDied","Data":"1240d6fd1cedd54b513b218e448ec1051d4c4912f66b7e655805dc838e90a14c"} Jan 21 16:15:03 crc kubenswrapper[4760]: I0121 16:15:03.621666 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-65sc9" Jan 21 16:15:03 crc kubenswrapper[4760]: I0121 16:15:03.719768 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/751cfeab-2105-46b2-93bd-d5b7b09c8ee4-secret-volume\") pod \"751cfeab-2105-46b2-93bd-d5b7b09c8ee4\" (UID: \"751cfeab-2105-46b2-93bd-d5b7b09c8ee4\") " Jan 21 16:15:03 crc kubenswrapper[4760]: I0121 16:15:03.719826 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/751cfeab-2105-46b2-93bd-d5b7b09c8ee4-config-volume\") pod \"751cfeab-2105-46b2-93bd-d5b7b09c8ee4\" (UID: \"751cfeab-2105-46b2-93bd-d5b7b09c8ee4\") " Jan 21 16:15:03 crc kubenswrapper[4760]: I0121 16:15:03.719865 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbdwk\" (UniqueName: \"kubernetes.io/projected/751cfeab-2105-46b2-93bd-d5b7b09c8ee4-kube-api-access-qbdwk\") pod \"751cfeab-2105-46b2-93bd-d5b7b09c8ee4\" (UID: \"751cfeab-2105-46b2-93bd-d5b7b09c8ee4\") " Jan 21 16:15:03 crc kubenswrapper[4760]: I0121 16:15:03.720528 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/751cfeab-2105-46b2-93bd-d5b7b09c8ee4-config-volume" (OuterVolumeSpecName: "config-volume") pod "751cfeab-2105-46b2-93bd-d5b7b09c8ee4" (UID: "751cfeab-2105-46b2-93bd-d5b7b09c8ee4"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:15:03 crc kubenswrapper[4760]: I0121 16:15:03.721874 4760 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/751cfeab-2105-46b2-93bd-d5b7b09c8ee4-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 16:15:03 crc kubenswrapper[4760]: I0121 16:15:03.747104 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/751cfeab-2105-46b2-93bd-d5b7b09c8ee4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "751cfeab-2105-46b2-93bd-d5b7b09c8ee4" (UID: "751cfeab-2105-46b2-93bd-d5b7b09c8ee4"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:15:03 crc kubenswrapper[4760]: I0121 16:15:03.748599 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/751cfeab-2105-46b2-93bd-d5b7b09c8ee4-kube-api-access-qbdwk" (OuterVolumeSpecName: "kube-api-access-qbdwk") pod "751cfeab-2105-46b2-93bd-d5b7b09c8ee4" (UID: "751cfeab-2105-46b2-93bd-d5b7b09c8ee4"). InnerVolumeSpecName "kube-api-access-qbdwk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:15:03 crc kubenswrapper[4760]: I0121 16:15:03.824474 4760 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/751cfeab-2105-46b2-93bd-d5b7b09c8ee4-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 16:15:03 crc kubenswrapper[4760]: I0121 16:15:03.824528 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbdwk\" (UniqueName: \"kubernetes.io/projected/751cfeab-2105-46b2-93bd-d5b7b09c8ee4-kube-api-access-qbdwk\") on node \"crc\" DevicePath \"\"" Jan 21 16:15:04 crc kubenswrapper[4760]: I0121 16:15:04.259226 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-65sc9" event={"ID":"751cfeab-2105-46b2-93bd-d5b7b09c8ee4","Type":"ContainerDied","Data":"4ebfb277fba8166bcf33bf3adbb69809f76f9258ebcc88af8ad291ddb865ca0f"} Jan 21 16:15:04 crc kubenswrapper[4760]: I0121 16:15:04.259271 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483535-65sc9" Jan 21 16:15:04 crc kubenswrapper[4760]: I0121 16:15:04.259285 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ebfb277fba8166bcf33bf3adbb69809f76f9258ebcc88af8ad291ddb865ca0f" Jan 21 16:15:07 crc kubenswrapper[4760]: I0121 16:15:07.051001 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-chqpt"] Jan 21 16:15:07 crc kubenswrapper[4760]: I0121 16:15:07.063004 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-chqpt"] Jan 21 16:15:07 crc kubenswrapper[4760]: I0121 16:15:07.635426 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="956c0478-0da7-419e-b003-65e479971040" path="/var/lib/kubelet/pods/956c0478-0da7-419e-b003-65e479971040/volumes" Jan 21 16:15:08 crc kubenswrapper[4760]: I0121 16:15:08.031007 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-995cl"] Jan 21 16:15:08 crc kubenswrapper[4760]: I0121 16:15:08.039042 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-995cl"] Jan 21 16:15:08 crc kubenswrapper[4760]: I0121 16:15:08.048946 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-7726-account-create-update-jdlpj"] Jan 21 16:15:08 crc kubenswrapper[4760]: I0121 16:15:08.060064 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-f76c-account-create-update-9wpkx"] Jan 21 16:15:08 crc kubenswrapper[4760]: I0121 16:15:08.068194 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-f76c-account-create-update-9wpkx"] Jan 21 16:15:08 crc kubenswrapper[4760]: I0121 16:15:08.075599 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-7726-account-create-update-jdlpj"] Jan 21 16:15:09 crc kubenswrapper[4760]: I0121 16:15:09.634951 4760 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0df56532-7a5e-43a1-88cd-2d55f731b0f1" path="/var/lib/kubelet/pods/0df56532-7a5e-43a1-88cd-2d55f731b0f1/volumes" Jan 21 16:15:09 crc kubenswrapper[4760]: I0121 16:15:09.636080 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1213619b-eee7-4221-9083-06362fc707f5" path="/var/lib/kubelet/pods/1213619b-eee7-4221-9083-06362fc707f5/volumes" Jan 21 16:15:09 crc kubenswrapper[4760]: I0121 16:15:09.636689 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3de43463-27f1-4fbe-959a-6c6446414177" path="/var/lib/kubelet/pods/3de43463-27f1-4fbe-959a-6c6446414177/volumes" Jan 21 16:15:14 crc kubenswrapper[4760]: I0121 16:15:14.035096 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-7cb9-account-create-update-8b224"] Jan 21 16:15:14 crc kubenswrapper[4760]: I0121 16:15:14.044382 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-c8jlj"] Jan 21 16:15:14 crc kubenswrapper[4760]: I0121 16:15:14.053066 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-7cb9-account-create-update-8b224"] Jan 21 16:15:14 crc kubenswrapper[4760]: I0121 16:15:14.062138 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-c8jlj"] Jan 21 16:15:15 crc kubenswrapper[4760]: I0121 16:15:15.632031 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a391de4-6ff8-49ac-93cb-98b98202f3f1" path="/var/lib/kubelet/pods/7a391de4-6ff8-49ac-93cb-98b98202f3f1/volumes" Jan 21 16:15:15 crc kubenswrapper[4760]: I0121 16:15:15.633014 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1305608-194d-4c7f-b3c7-8d6925fed34f" path="/var/lib/kubelet/pods/d1305608-194d-4c7f-b3c7-8d6925fed34f/volumes" Jan 21 16:15:20 crc kubenswrapper[4760]: I0121 16:15:20.946356 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:15:20 crc kubenswrapper[4760]: I0121 16:15:20.946953 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:15:20 crc kubenswrapper[4760]: I0121 16:15:20.947042 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" Jan 21 16:15:20 crc kubenswrapper[4760]: I0121 16:15:20.948145 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3397077c04f1562ba759efe22bb62c3f46a49e2e5a066d18955c7b87afdb4965"} pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 16:15:20 crc kubenswrapper[4760]: I0121 16:15:20.948277 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" containerID="cri-o://3397077c04f1562ba759efe22bb62c3f46a49e2e5a066d18955c7b87afdb4965" gracePeriod=600 Jan 21 16:15:21 crc kubenswrapper[4760]: E0121 16:15:21.071870 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:15:21 crc kubenswrapper[4760]: I0121 16:15:21.443049 4760 generic.go:334] "Generic (PLEG): container finished" podID="5dd365e7-570c-4130-a299-30e376624ce2" containerID="3397077c04f1562ba759efe22bb62c3f46a49e2e5a066d18955c7b87afdb4965" exitCode=0 Jan 21 16:15:21 crc kubenswrapper[4760]: I0121 16:15:21.443158 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerDied","Data":"3397077c04f1562ba759efe22bb62c3f46a49e2e5a066d18955c7b87afdb4965"} Jan 21 16:15:21 crc kubenswrapper[4760]: I0121 16:15:21.443730 4760 scope.go:117] "RemoveContainer" containerID="e2bad28ace3137e8b3c05faf3797d4cccff7ccfe4381357924a1c6533e28ed41" Jan 21 16:15:21 crc kubenswrapper[4760]: I0121 16:15:21.444769 4760 scope.go:117] "RemoveContainer" containerID="3397077c04f1562ba759efe22bb62c3f46a49e2e5a066d18955c7b87afdb4965" Jan 21 16:15:21 crc kubenswrapper[4760]: E0121 16:15:21.445297 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:15:23 crc kubenswrapper[4760]: I0121 16:15:23.053148 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-7dgzd"] Jan 21 16:15:23 crc kubenswrapper[4760]: I0121 16:15:23.068917 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-faad-account-create-update-4btpg"] Jan 
21 16:15:23 crc kubenswrapper[4760]: I0121 16:15:23.080374 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-884n6"] Jan 21 16:15:23 crc kubenswrapper[4760]: I0121 16:15:23.090289 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-nfdkf"] Jan 21 16:15:23 crc kubenswrapper[4760]: I0121 16:15:23.097822 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-b0d0-account-create-update-jg2cl"] Jan 21 16:15:23 crc kubenswrapper[4760]: I0121 16:15:23.111417 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-4b52-account-create-update-fnb8z"] Jan 21 16:15:23 crc kubenswrapper[4760]: I0121 16:15:23.113127 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-7dgzd"] Jan 21 16:15:23 crc kubenswrapper[4760]: I0121 16:15:23.121861 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-884n6"] Jan 21 16:15:23 crc kubenswrapper[4760]: I0121 16:15:23.129954 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-b0d0-account-create-update-jg2cl"] Jan 21 16:15:23 crc kubenswrapper[4760]: I0121 16:15:23.139863 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-4b52-account-create-update-fnb8z"] Jan 21 16:15:23 crc kubenswrapper[4760]: I0121 16:15:23.147886 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-nfdkf"] Jan 21 16:15:23 crc kubenswrapper[4760]: I0121 16:15:23.155249 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-faad-account-create-update-4btpg"] Jan 21 16:15:23 crc kubenswrapper[4760]: I0121 16:15:23.634110 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f90ad69-2b58-48f1-a605-63486d38956f" path="/var/lib/kubelet/pods/5f90ad69-2b58-48f1-a605-63486d38956f/volumes" Jan 21 16:15:23 crc kubenswrapper[4760]: I0121 16:15:23.635364 4760 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85bbee56-6cf4-4653-b69f-59b68063b3a1" path="/var/lib/kubelet/pods/85bbee56-6cf4-4653-b69f-59b68063b3a1/volumes" Jan 21 16:15:23 crc kubenswrapper[4760]: I0121 16:15:23.636318 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="988b2688-7981-4093-a1d2-45796fb69f52" path="/var/lib/kubelet/pods/988b2688-7981-4093-a1d2-45796fb69f52/volumes" Jan 21 16:15:23 crc kubenswrapper[4760]: I0121 16:15:23.637513 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba247535-e91f-47de-a9c2-0ce8e91f8d23" path="/var/lib/kubelet/pods/ba247535-e91f-47de-a9c2-0ce8e91f8d23/volumes" Jan 21 16:15:23 crc kubenswrapper[4760]: I0121 16:15:23.639610 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c29c2669-d63b-4ac4-8680-fc14ced158f1" path="/var/lib/kubelet/pods/c29c2669-d63b-4ac4-8680-fc14ced158f1/volumes" Jan 21 16:15:23 crc kubenswrapper[4760]: I0121 16:15:23.640662 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddbef96c-1bfa-412a-a49d-460b6f6d90f9" path="/var/lib/kubelet/pods/ddbef96c-1bfa-412a-a49d-460b6f6d90f9/volumes" Jan 21 16:15:24 crc kubenswrapper[4760]: I0121 16:15:24.476799 4760 generic.go:334] "Generic (PLEG): container finished" podID="f4ba3e4f-146a-4af6-885a-877760c90ce0" containerID="defdf7fa7292a1d9855424da069fd6a8bf3368105d1a5f8798dea777f52df2a8" exitCode=0 Jan 21 16:15:24 crc kubenswrapper[4760]: I0121 16:15:24.476868 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f" event={"ID":"f4ba3e4f-146a-4af6-885a-877760c90ce0","Type":"ContainerDied","Data":"defdf7fa7292a1d9855424da069fd6a8bf3368105d1a5f8798dea777f52df2a8"} Jan 21 16:15:25 crc kubenswrapper[4760]: I0121 16:15:25.910072 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f" Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.021304 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4ba3e4f-146a-4af6-885a-877760c90ce0-inventory\") pod \"f4ba3e4f-146a-4af6-885a-877760c90ce0\" (UID: \"f4ba3e4f-146a-4af6-885a-877760c90ce0\") " Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.021518 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4ba3e4f-146a-4af6-885a-877760c90ce0-bootstrap-combined-ca-bundle\") pod \"f4ba3e4f-146a-4af6-885a-877760c90ce0\" (UID: \"f4ba3e4f-146a-4af6-885a-877760c90ce0\") " Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.021680 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fsgj\" (UniqueName: \"kubernetes.io/projected/f4ba3e4f-146a-4af6-885a-877760c90ce0-kube-api-access-5fsgj\") pod \"f4ba3e4f-146a-4af6-885a-877760c90ce0\" (UID: \"f4ba3e4f-146a-4af6-885a-877760c90ce0\") " Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.021746 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f4ba3e4f-146a-4af6-885a-877760c90ce0-ssh-key-openstack-edpm-ipam\") pod \"f4ba3e4f-146a-4af6-885a-877760c90ce0\" (UID: \"f4ba3e4f-146a-4af6-885a-877760c90ce0\") " Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.028773 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4ba3e4f-146a-4af6-885a-877760c90ce0-kube-api-access-5fsgj" (OuterVolumeSpecName: "kube-api-access-5fsgj") pod "f4ba3e4f-146a-4af6-885a-877760c90ce0" (UID: "f4ba3e4f-146a-4af6-885a-877760c90ce0"). InnerVolumeSpecName "kube-api-access-5fsgj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.028801 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4ba3e4f-146a-4af6-885a-877760c90ce0-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "f4ba3e4f-146a-4af6-885a-877760c90ce0" (UID: "f4ba3e4f-146a-4af6-885a-877760c90ce0"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.052426 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4ba3e4f-146a-4af6-885a-877760c90ce0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f4ba3e4f-146a-4af6-885a-877760c90ce0" (UID: "f4ba3e4f-146a-4af6-885a-877760c90ce0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.055632 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4ba3e4f-146a-4af6-885a-877760c90ce0-inventory" (OuterVolumeSpecName: "inventory") pod "f4ba3e4f-146a-4af6-885a-877760c90ce0" (UID: "f4ba3e4f-146a-4af6-885a-877760c90ce0"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.125227 4760 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4ba3e4f-146a-4af6-885a-877760c90ce0-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.125307 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fsgj\" (UniqueName: \"kubernetes.io/projected/f4ba3e4f-146a-4af6-885a-877760c90ce0-kube-api-access-5fsgj\") on node \"crc\" DevicePath \"\"" Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.125367 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f4ba3e4f-146a-4af6-885a-877760c90ce0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.125387 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4ba3e4f-146a-4af6-885a-877760c90ce0-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.500064 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f" event={"ID":"f4ba3e4f-146a-4af6-885a-877760c90ce0","Type":"ContainerDied","Data":"eec29a2bedebeb1a33619871536ab8542a1b2f6f9a986c85306788a51f25389a"} Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.500445 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eec29a2bedebeb1a33619871536ab8542a1b2f6f9a986c85306788a51f25389a" Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.500102 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f" Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.586291 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sd829"] Jan 21 16:15:26 crc kubenswrapper[4760]: E0121 16:15:26.586710 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4ba3e4f-146a-4af6-885a-877760c90ce0" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.586727 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4ba3e4f-146a-4af6-885a-877760c90ce0" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 21 16:15:26 crc kubenswrapper[4760]: E0121 16:15:26.586760 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="751cfeab-2105-46b2-93bd-d5b7b09c8ee4" containerName="collect-profiles" Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.586767 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="751cfeab-2105-46b2-93bd-d5b7b09c8ee4" containerName="collect-profiles" Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.586951 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4ba3e4f-146a-4af6-885a-877760c90ce0" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.586978 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="751cfeab-2105-46b2-93bd-d5b7b09c8ee4" containerName="collect-profiles" Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.587603 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sd829" Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.590589 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.590790 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.590850 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.593397 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-brqp8" Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.604653 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sd829"] Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.737820 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2ffc46c3-eeae-4b68-bede-4c1e5af6fe46-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sd829\" (UID: \"2ffc46c3-eeae-4b68-bede-4c1e5af6fe46\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sd829" Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.738154 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcsb7\" (UniqueName: \"kubernetes.io/projected/2ffc46c3-eeae-4b68-bede-4c1e5af6fe46-kube-api-access-rcsb7\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sd829\" (UID: \"2ffc46c3-eeae-4b68-bede-4c1e5af6fe46\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sd829" Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 
16:15:26.738277 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2ffc46c3-eeae-4b68-bede-4c1e5af6fe46-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sd829\" (UID: \"2ffc46c3-eeae-4b68-bede-4c1e5af6fe46\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sd829" Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.840215 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2ffc46c3-eeae-4b68-bede-4c1e5af6fe46-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sd829\" (UID: \"2ffc46c3-eeae-4b68-bede-4c1e5af6fe46\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sd829" Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.840358 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcsb7\" (UniqueName: \"kubernetes.io/projected/2ffc46c3-eeae-4b68-bede-4c1e5af6fe46-kube-api-access-rcsb7\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sd829\" (UID: \"2ffc46c3-eeae-4b68-bede-4c1e5af6fe46\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sd829" Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.840418 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2ffc46c3-eeae-4b68-bede-4c1e5af6fe46-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sd829\" (UID: \"2ffc46c3-eeae-4b68-bede-4c1e5af6fe46\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sd829" Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.847405 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/2ffc46c3-eeae-4b68-bede-4c1e5af6fe46-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sd829\" (UID: \"2ffc46c3-eeae-4b68-bede-4c1e5af6fe46\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sd829" Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.852424 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2ffc46c3-eeae-4b68-bede-4c1e5af6fe46-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sd829\" (UID: \"2ffc46c3-eeae-4b68-bede-4c1e5af6fe46\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sd829" Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.857882 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcsb7\" (UniqueName: \"kubernetes.io/projected/2ffc46c3-eeae-4b68-bede-4c1e5af6fe46-kube-api-access-rcsb7\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sd829\" (UID: \"2ffc46c3-eeae-4b68-bede-4c1e5af6fe46\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sd829" Jan 21 16:15:26 crc kubenswrapper[4760]: I0121 16:15:26.906212 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sd829" Jan 21 16:15:27 crc kubenswrapper[4760]: I0121 16:15:27.238543 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sd829"] Jan 21 16:15:27 crc kubenswrapper[4760]: I0121 16:15:27.513619 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sd829" event={"ID":"2ffc46c3-eeae-4b68-bede-4c1e5af6fe46","Type":"ContainerStarted","Data":"c818bd2294ea7ee26a011a1c01cab4b1acea0306875c8299e2fca36d5b2b1688"} Jan 21 16:15:28 crc kubenswrapper[4760]: I0121 16:15:28.526112 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sd829" event={"ID":"2ffc46c3-eeae-4b68-bede-4c1e5af6fe46","Type":"ContainerStarted","Data":"fd08c269566512dc50aea9892d7455be0539de1acae159fe74c554b0b6de4f92"} Jan 21 16:15:28 crc kubenswrapper[4760]: I0121 16:15:28.544862 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sd829" podStartSLOduration=1.7840344799999999 podStartE2EDuration="2.544834286s" podCreationTimestamp="2026-01-21 16:15:26 +0000 UTC" firstStartedPulling="2026-01-21 16:15:27.24323036 +0000 UTC m=+1697.910999938" lastFinishedPulling="2026-01-21 16:15:28.004030166 +0000 UTC m=+1698.671799744" observedRunningTime="2026-01-21 16:15:28.544530245 +0000 UTC m=+1699.212299823" watchObservedRunningTime="2026-01-21 16:15:28.544834286 +0000 UTC m=+1699.212603864" Jan 21 16:15:34 crc kubenswrapper[4760]: I0121 16:15:34.623210 4760 scope.go:117] "RemoveContainer" containerID="3397077c04f1562ba759efe22bb62c3f46a49e2e5a066d18955c7b87afdb4965" Jan 21 16:15:34 crc kubenswrapper[4760]: E0121 16:15:34.624221 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:15:40 crc kubenswrapper[4760]: I0121 16:15:40.429576 4760 scope.go:117] "RemoveContainer" containerID="0e6d219ee4178496a68d572f97cdfac7435b2070269de1ee0c2609d9a8855f3a" Jan 21 16:15:40 crc kubenswrapper[4760]: I0121 16:15:40.453776 4760 scope.go:117] "RemoveContainer" containerID="f514b862ce6febf31745a4600f53c62282c7ae1396e230bccd313518c93d17f3" Jan 21 16:15:40 crc kubenswrapper[4760]: I0121 16:15:40.501964 4760 scope.go:117] "RemoveContainer" containerID="a6817629fde9a036c8116050940d3d6eb527900cceb0a266e33fc80b17fab3a5" Jan 21 16:15:40 crc kubenswrapper[4760]: I0121 16:15:40.575289 4760 scope.go:117] "RemoveContainer" containerID="0cb7c1192b9373f0abfb8527833717ba33c1e62912901a62d92e86f693360455" Jan 21 16:15:40 crc kubenswrapper[4760]: I0121 16:15:40.625900 4760 scope.go:117] "RemoveContainer" containerID="b5fa9c7a45fe6e80d225cf15affa00928bbc3595be19eaab232935c968758bd4" Jan 21 16:15:40 crc kubenswrapper[4760]: I0121 16:15:40.659381 4760 scope.go:117] "RemoveContainer" containerID="7de641f204067609e49f50987152c414d28eafc669df0fe2da325a6f2ce739fc" Jan 21 16:15:40 crc kubenswrapper[4760]: I0121 16:15:40.707629 4760 scope.go:117] "RemoveContainer" containerID="076671479fb4a9b0098f421cb1f3323d0a1e7ed2c971c735c221adbcc2c7c91d" Jan 21 16:15:40 crc kubenswrapper[4760]: I0121 16:15:40.730082 4760 scope.go:117] "RemoveContainer" containerID="fe5781da8649c8f98ecf95f282a3089bdbf617af0333c695ebaab112efe3ad7d" Jan 21 16:15:40 crc kubenswrapper[4760]: I0121 16:15:40.752093 4760 scope.go:117] "RemoveContainer" containerID="d739d05fc22214fe3c7c409de25f8b4ecba3a6dc0a47b8d9d33db77a68c9cda6" Jan 21 16:15:40 crc kubenswrapper[4760]: I0121 16:15:40.774484 4760 
scope.go:117] "RemoveContainer" containerID="90db8f63e9a72921008b460ecbc6a78ebe203277ffd590f54ada4404d5230b48" Jan 21 16:15:40 crc kubenswrapper[4760]: I0121 16:15:40.799612 4760 scope.go:117] "RemoveContainer" containerID="10fe11ee0330d53dd2513eb5aab1ba95be58078705bc92fe3db3946600df96c8" Jan 21 16:15:40 crc kubenswrapper[4760]: I0121 16:15:40.818507 4760 scope.go:117] "RemoveContainer" containerID="f3059f2297611d8f3f39a3872eddb93d73f1a7a124c9fad360dc2e76972fdc19" Jan 21 16:15:49 crc kubenswrapper[4760]: I0121 16:15:49.635773 4760 scope.go:117] "RemoveContainer" containerID="3397077c04f1562ba759efe22bb62c3f46a49e2e5a066d18955c7b87afdb4965" Jan 21 16:15:49 crc kubenswrapper[4760]: E0121 16:15:49.636570 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:15:52 crc kubenswrapper[4760]: I0121 16:15:52.055501 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-28f8s"] Jan 21 16:15:52 crc kubenswrapper[4760]: I0121 16:15:52.066940 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-28f8s"] Jan 21 16:15:53 crc kubenswrapper[4760]: I0121 16:15:53.635338 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5977817a-76bd-4df7-b942-4553334f046c" path="/var/lib/kubelet/pods/5977817a-76bd-4df7-b942-4553334f046c/volumes" Jan 21 16:15:56 crc kubenswrapper[4760]: I0121 16:15:56.031895 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-tjb9t"] Jan 21 16:15:56 crc kubenswrapper[4760]: I0121 16:15:56.041234 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/root-account-create-update-tjb9t"] Jan 21 16:15:57 crc kubenswrapper[4760]: I0121 16:15:57.634579 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d4e60fd-bb4c-4460-87db-729dac85afbc" path="/var/lib/kubelet/pods/6d4e60fd-bb4c-4460-87db-729dac85afbc/volumes" Jan 21 16:16:00 crc kubenswrapper[4760]: I0121 16:16:00.623997 4760 scope.go:117] "RemoveContainer" containerID="3397077c04f1562ba759efe22bb62c3f46a49e2e5a066d18955c7b87afdb4965" Jan 21 16:16:00 crc kubenswrapper[4760]: E0121 16:16:00.626949 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:16:13 crc kubenswrapper[4760]: I0121 16:16:13.623567 4760 scope.go:117] "RemoveContainer" containerID="3397077c04f1562ba759efe22bb62c3f46a49e2e5a066d18955c7b87afdb4965" Jan 21 16:16:13 crc kubenswrapper[4760]: E0121 16:16:13.624368 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:16:27 crc kubenswrapper[4760]: I0121 16:16:27.623215 4760 scope.go:117] "RemoveContainer" containerID="3397077c04f1562ba759efe22bb62c3f46a49e2e5a066d18955c7b87afdb4965" Jan 21 16:16:27 crc kubenswrapper[4760]: E0121 16:16:27.625113 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:16:41 crc kubenswrapper[4760]: I0121 16:16:41.045948 4760 scope.go:117] "RemoveContainer" containerID="16560ea06e9421f0e5c8aafa10dc7a4873736db8d680f7d5984ad145fec2490a" Jan 21 16:16:41 crc kubenswrapper[4760]: I0121 16:16:41.150446 4760 scope.go:117] "RemoveContainer" containerID="81025c437bf683bd16828e1e94a515f5499117166adbfe37c74158127779092b" Jan 21 16:16:42 crc kubenswrapper[4760]: I0121 16:16:42.623369 4760 scope.go:117] "RemoveContainer" containerID="3397077c04f1562ba759efe22bb62c3f46a49e2e5a066d18955c7b87afdb4965" Jan 21 16:16:42 crc kubenswrapper[4760]: E0121 16:16:42.625290 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:16:48 crc kubenswrapper[4760]: I0121 16:16:48.043995 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-tp55g"] Jan 21 16:16:48 crc kubenswrapper[4760]: I0121 16:16:48.053615 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-tp55g"] Jan 21 16:16:49 crc kubenswrapper[4760]: I0121 16:16:49.644192 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="753473df-c019-484a-95d5-01f46173e10a" path="/var/lib/kubelet/pods/753473df-c019-484a-95d5-01f46173e10a/volumes" Jan 21 16:16:50 crc kubenswrapper[4760]: I0121 16:16:50.044531 4760 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-f8htt"] Jan 21 16:16:50 crc kubenswrapper[4760]: I0121 16:16:50.053846 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-f8htt"] Jan 21 16:16:51 crc kubenswrapper[4760]: I0121 16:16:51.031221 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-65fzw"] Jan 21 16:16:51 crc kubenswrapper[4760]: I0121 16:16:51.039029 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-65fzw"] Jan 21 16:16:51 crc kubenswrapper[4760]: I0121 16:16:51.646929 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20523ada-9ffa-4d1d-bf08-913672aa7df6" path="/var/lib/kubelet/pods/20523ada-9ffa-4d1d-bf08-913672aa7df6/volumes" Jan 21 16:16:51 crc kubenswrapper[4760]: I0121 16:16:51.648122 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="820ab298-8a58-4ac5-b7d2-ff030c6d2aff" path="/var/lib/kubelet/pods/820ab298-8a58-4ac5-b7d2-ff030c6d2aff/volumes" Jan 21 16:16:53 crc kubenswrapper[4760]: I0121 16:16:53.622828 4760 scope.go:117] "RemoveContainer" containerID="3397077c04f1562ba759efe22bb62c3f46a49e2e5a066d18955c7b87afdb4965" Jan 21 16:16:53 crc kubenswrapper[4760]: E0121 16:16:53.623136 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:17:04 crc kubenswrapper[4760]: I0121 16:17:04.623494 4760 scope.go:117] "RemoveContainer" containerID="3397077c04f1562ba759efe22bb62c3f46a49e2e5a066d18955c7b87afdb4965" Jan 21 16:17:04 crc kubenswrapper[4760]: E0121 16:17:04.624242 4760 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:17:06 crc kubenswrapper[4760]: I0121 16:17:06.041230 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-pgvwf"] Jan 21 16:17:06 crc kubenswrapper[4760]: I0121 16:17:06.049573 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-pgvwf"] Jan 21 16:17:07 crc kubenswrapper[4760]: I0121 16:17:07.633187 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e272905b-28ec-4f49-8c51-f5c5d97c4a9d" path="/var/lib/kubelet/pods/e272905b-28ec-4f49-8c51-f5c5d97c4a9d/volumes" Jan 21 16:17:09 crc kubenswrapper[4760]: I0121 16:17:09.031021 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-j76bd"] Jan 21 16:17:09 crc kubenswrapper[4760]: I0121 16:17:09.042713 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-j76bd"] Jan 21 16:17:09 crc kubenswrapper[4760]: I0121 16:17:09.646506 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bf0e00e-fc38-45a9-8615-dd5398ed1209" path="/var/lib/kubelet/pods/3bf0e00e-fc38-45a9-8615-dd5398ed1209/volumes" Jan 21 16:17:10 crc kubenswrapper[4760]: I0121 16:17:10.035981 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-nf7wp"] Jan 21 16:17:10 crc kubenswrapper[4760]: I0121 16:17:10.045077 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-nf7wp"] Jan 21 16:17:11 crc kubenswrapper[4760]: I0121 16:17:11.634875 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="c4fdfaae-d8ad-46d6-b30a-1b671408ca51" path="/var/lib/kubelet/pods/c4fdfaae-d8ad-46d6-b30a-1b671408ca51/volumes" Jan 21 16:17:18 crc kubenswrapper[4760]: I0121 16:17:18.623268 4760 scope.go:117] "RemoveContainer" containerID="3397077c04f1562ba759efe22bb62c3f46a49e2e5a066d18955c7b87afdb4965" Jan 21 16:17:18 crc kubenswrapper[4760]: E0121 16:17:18.624051 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:17:28 crc kubenswrapper[4760]: I0121 16:17:28.615185 4760 generic.go:334] "Generic (PLEG): container finished" podID="2ffc46c3-eeae-4b68-bede-4c1e5af6fe46" containerID="fd08c269566512dc50aea9892d7455be0539de1acae159fe74c554b0b6de4f92" exitCode=0 Jan 21 16:17:28 crc kubenswrapper[4760]: I0121 16:17:28.615275 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sd829" event={"ID":"2ffc46c3-eeae-4b68-bede-4c1e5af6fe46","Type":"ContainerDied","Data":"fd08c269566512dc50aea9892d7455be0539de1acae159fe74c554b0b6de4f92"} Jan 21 16:17:30 crc kubenswrapper[4760]: I0121 16:17:30.038592 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sd829" Jan 21 16:17:30 crc kubenswrapper[4760]: I0121 16:17:30.130483 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2ffc46c3-eeae-4b68-bede-4c1e5af6fe46-ssh-key-openstack-edpm-ipam\") pod \"2ffc46c3-eeae-4b68-bede-4c1e5af6fe46\" (UID: \"2ffc46c3-eeae-4b68-bede-4c1e5af6fe46\") " Jan 21 16:17:30 crc kubenswrapper[4760]: I0121 16:17:30.130735 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcsb7\" (UniqueName: \"kubernetes.io/projected/2ffc46c3-eeae-4b68-bede-4c1e5af6fe46-kube-api-access-rcsb7\") pod \"2ffc46c3-eeae-4b68-bede-4c1e5af6fe46\" (UID: \"2ffc46c3-eeae-4b68-bede-4c1e5af6fe46\") " Jan 21 16:17:30 crc kubenswrapper[4760]: I0121 16:17:30.130784 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2ffc46c3-eeae-4b68-bede-4c1e5af6fe46-inventory\") pod \"2ffc46c3-eeae-4b68-bede-4c1e5af6fe46\" (UID: \"2ffc46c3-eeae-4b68-bede-4c1e5af6fe46\") " Jan 21 16:17:30 crc kubenswrapper[4760]: I0121 16:17:30.136540 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ffc46c3-eeae-4b68-bede-4c1e5af6fe46-kube-api-access-rcsb7" (OuterVolumeSpecName: "kube-api-access-rcsb7") pod "2ffc46c3-eeae-4b68-bede-4c1e5af6fe46" (UID: "2ffc46c3-eeae-4b68-bede-4c1e5af6fe46"). InnerVolumeSpecName "kube-api-access-rcsb7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:17:30 crc kubenswrapper[4760]: I0121 16:17:30.163936 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ffc46c3-eeae-4b68-bede-4c1e5af6fe46-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2ffc46c3-eeae-4b68-bede-4c1e5af6fe46" (UID: "2ffc46c3-eeae-4b68-bede-4c1e5af6fe46"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:17:30 crc kubenswrapper[4760]: I0121 16:17:30.168714 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ffc46c3-eeae-4b68-bede-4c1e5af6fe46-inventory" (OuterVolumeSpecName: "inventory") pod "2ffc46c3-eeae-4b68-bede-4c1e5af6fe46" (UID: "2ffc46c3-eeae-4b68-bede-4c1e5af6fe46"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:17:30 crc kubenswrapper[4760]: I0121 16:17:30.233522 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcsb7\" (UniqueName: \"kubernetes.io/projected/2ffc46c3-eeae-4b68-bede-4c1e5af6fe46-kube-api-access-rcsb7\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:30 crc kubenswrapper[4760]: I0121 16:17:30.233564 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2ffc46c3-eeae-4b68-bede-4c1e5af6fe46-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:30 crc kubenswrapper[4760]: I0121 16:17:30.233575 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2ffc46c3-eeae-4b68-bede-4c1e5af6fe46-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 16:17:30 crc kubenswrapper[4760]: I0121 16:17:30.622488 4760 scope.go:117] "RemoveContainer" containerID="3397077c04f1562ba759efe22bb62c3f46a49e2e5a066d18955c7b87afdb4965" Jan 21 16:17:30 crc kubenswrapper[4760]: 
E0121 16:17:30.622843 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:17:30 crc kubenswrapper[4760]: I0121 16:17:30.640779 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sd829" event={"ID":"2ffc46c3-eeae-4b68-bede-4c1e5af6fe46","Type":"ContainerDied","Data":"c818bd2294ea7ee26a011a1c01cab4b1acea0306875c8299e2fca36d5b2b1688"} Jan 21 16:17:30 crc kubenswrapper[4760]: I0121 16:17:30.640826 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sd829" Jan 21 16:17:30 crc kubenswrapper[4760]: I0121 16:17:30.640830 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c818bd2294ea7ee26a011a1c01cab4b1acea0306875c8299e2fca36d5b2b1688" Jan 21 16:17:30 crc kubenswrapper[4760]: I0121 16:17:30.729636 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c7vns"] Jan 21 16:17:30 crc kubenswrapper[4760]: E0121 16:17:30.730096 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ffc46c3-eeae-4b68-bede-4c1e5af6fe46" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 21 16:17:30 crc kubenswrapper[4760]: I0121 16:17:30.730121 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ffc46c3-eeae-4b68-bede-4c1e5af6fe46" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 21 16:17:30 crc kubenswrapper[4760]: I0121 16:17:30.730306 4760 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="2ffc46c3-eeae-4b68-bede-4c1e5af6fe46" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 21 16:17:30 crc kubenswrapper[4760]: I0121 16:17:30.731257 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c7vns" Jan 21 16:17:30 crc kubenswrapper[4760]: I0121 16:17:30.733578 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 16:17:30 crc kubenswrapper[4760]: I0121 16:17:30.733577 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 16:17:30 crc kubenswrapper[4760]: I0121 16:17:30.733660 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:17:30 crc kubenswrapper[4760]: I0121 16:17:30.734467 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-brqp8" Jan 21 16:17:30 crc kubenswrapper[4760]: I0121 16:17:30.740022 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c7vns"] Jan 21 16:17:30 crc kubenswrapper[4760]: I0121 16:17:30.846117 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8adc5733-eeac-4148-878a-61b908f0a85b-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-c7vns\" (UID: \"8adc5733-eeac-4148-878a-61b908f0a85b\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c7vns" Jan 21 16:17:30 crc kubenswrapper[4760]: I0121 16:17:30.846247 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/8adc5733-eeac-4148-878a-61b908f0a85b-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-c7vns\" (UID: \"8adc5733-eeac-4148-878a-61b908f0a85b\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c7vns" Jan 21 16:17:30 crc kubenswrapper[4760]: I0121 16:17:30.846297 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpbk9\" (UniqueName: \"kubernetes.io/projected/8adc5733-eeac-4148-878a-61b908f0a85b-kube-api-access-kpbk9\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-c7vns\" (UID: \"8adc5733-eeac-4148-878a-61b908f0a85b\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c7vns" Jan 21 16:17:30 crc kubenswrapper[4760]: I0121 16:17:30.948108 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8adc5733-eeac-4148-878a-61b908f0a85b-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-c7vns\" (UID: \"8adc5733-eeac-4148-878a-61b908f0a85b\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c7vns" Jan 21 16:17:30 crc kubenswrapper[4760]: I0121 16:17:30.948203 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpbk9\" (UniqueName: \"kubernetes.io/projected/8adc5733-eeac-4148-878a-61b908f0a85b-kube-api-access-kpbk9\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-c7vns\" (UID: \"8adc5733-eeac-4148-878a-61b908f0a85b\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c7vns" Jan 21 16:17:30 crc kubenswrapper[4760]: I0121 16:17:30.948847 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8adc5733-eeac-4148-878a-61b908f0a85b-inventory\") pod 
\"configure-network-edpm-deployment-openstack-edpm-ipam-c7vns\" (UID: \"8adc5733-eeac-4148-878a-61b908f0a85b\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c7vns" Jan 21 16:17:30 crc kubenswrapper[4760]: I0121 16:17:30.953210 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8adc5733-eeac-4148-878a-61b908f0a85b-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-c7vns\" (UID: \"8adc5733-eeac-4148-878a-61b908f0a85b\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c7vns" Jan 21 16:17:30 crc kubenswrapper[4760]: I0121 16:17:30.956396 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8adc5733-eeac-4148-878a-61b908f0a85b-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-c7vns\" (UID: \"8adc5733-eeac-4148-878a-61b908f0a85b\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c7vns" Jan 21 16:17:30 crc kubenswrapper[4760]: I0121 16:17:30.968866 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpbk9\" (UniqueName: \"kubernetes.io/projected/8adc5733-eeac-4148-878a-61b908f0a85b-kube-api-access-kpbk9\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-c7vns\" (UID: \"8adc5733-eeac-4148-878a-61b908f0a85b\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c7vns" Jan 21 16:17:31 crc kubenswrapper[4760]: I0121 16:17:31.057213 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c7vns" Jan 21 16:17:31 crc kubenswrapper[4760]: I0121 16:17:31.392541 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c7vns"] Jan 21 16:17:31 crc kubenswrapper[4760]: I0121 16:17:31.651742 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c7vns" event={"ID":"8adc5733-eeac-4148-878a-61b908f0a85b","Type":"ContainerStarted","Data":"15828a65854121830bc8dd087c5abe31f28f6fd590ea1092abbc0d2df91c7afb"} Jan 21 16:17:33 crc kubenswrapper[4760]: I0121 16:17:33.669262 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c7vns" event={"ID":"8adc5733-eeac-4148-878a-61b908f0a85b","Type":"ContainerStarted","Data":"e259a15eb5e97a769205ef21f8363656e1d7ea2b576cd3f5946a3708bee78524"} Jan 21 16:17:33 crc kubenswrapper[4760]: I0121 16:17:33.688715 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c7vns" podStartSLOduration=2.443699727 podStartE2EDuration="3.688695717s" podCreationTimestamp="2026-01-21 16:17:30 +0000 UTC" firstStartedPulling="2026-01-21 16:17:31.398502526 +0000 UTC m=+1822.066272104" lastFinishedPulling="2026-01-21 16:17:32.643498486 +0000 UTC m=+1823.311268094" observedRunningTime="2026-01-21 16:17:33.687307982 +0000 UTC m=+1824.355077570" watchObservedRunningTime="2026-01-21 16:17:33.688695717 +0000 UTC m=+1824.356465295" Jan 21 16:17:41 crc kubenswrapper[4760]: I0121 16:17:41.286411 4760 scope.go:117] "RemoveContainer" containerID="898b834ef1be751c68f08b1b203b5655c64ba2844d18217a131ec12119259d69" Jan 21 16:17:41 crc kubenswrapper[4760]: I0121 16:17:41.317302 4760 scope.go:117] "RemoveContainer" 
containerID="a1d8ef9f5a82dd4c8078328950cb300d1c89fe54dff0b7699d3d291f3d477977" Jan 21 16:17:41 crc kubenswrapper[4760]: I0121 16:17:41.400951 4760 scope.go:117] "RemoveContainer" containerID="598ec327f33c8f0775a344d68602f27c1cbe21cb28c1e14088316a4fccca40b4" Jan 21 16:17:41 crc kubenswrapper[4760]: I0121 16:17:41.444993 4760 scope.go:117] "RemoveContainer" containerID="5ec36ce6aab699462c440d289d5c9b3d34f0e04e9578c26f7a3403c0f8a3069f" Jan 21 16:17:41 crc kubenswrapper[4760]: I0121 16:17:41.501844 4760 scope.go:117] "RemoveContainer" containerID="ba59503ee28149f2f6bd1845497fbf26cee641517850130c45c50378919dce1a" Jan 21 16:17:41 crc kubenswrapper[4760]: I0121 16:17:41.538147 4760 scope.go:117] "RemoveContainer" containerID="f2f9c962eea17a5ad22e2d097f61c47f2fc98c187b725e9f8a87fc5cff3b07fb" Jan 21 16:17:45 crc kubenswrapper[4760]: I0121 16:17:45.623451 4760 scope.go:117] "RemoveContainer" containerID="3397077c04f1562ba759efe22bb62c3f46a49e2e5a066d18955c7b87afdb4965" Jan 21 16:17:45 crc kubenswrapper[4760]: E0121 16:17:45.624305 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:17:58 crc kubenswrapper[4760]: I0121 16:17:58.039906 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-wgjbm"] Jan 21 16:17:58 crc kubenswrapper[4760]: I0121 16:17:58.047378 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-5lmzc"] Jan 21 16:17:58 crc kubenswrapper[4760]: I0121 16:17:58.056134 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-wgjbm"] Jan 21 16:17:58 crc kubenswrapper[4760]: I0121 
16:17:58.063082 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-5lmzc"] Jan 21 16:17:59 crc kubenswrapper[4760]: I0121 16:17:59.033143 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-s6bgh"] Jan 21 16:17:59 crc kubenswrapper[4760]: I0121 16:17:59.041224 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-49b7-account-create-update-cbnck"] Jan 21 16:17:59 crc kubenswrapper[4760]: I0121 16:17:59.050992 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-1b63-account-create-update-s5scn"] Jan 21 16:17:59 crc kubenswrapper[4760]: I0121 16:17:59.058375 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-2ffe-account-create-update-sq7hp"] Jan 21 16:17:59 crc kubenswrapper[4760]: I0121 16:17:59.064982 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-s6bgh"] Jan 21 16:17:59 crc kubenswrapper[4760]: I0121 16:17:59.071357 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-2ffe-account-create-update-sq7hp"] Jan 21 16:17:59 crc kubenswrapper[4760]: I0121 16:17:59.077559 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-1b63-account-create-update-s5scn"] Jan 21 16:17:59 crc kubenswrapper[4760]: I0121 16:17:59.083782 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-49b7-account-create-update-cbnck"] Jan 21 16:17:59 crc kubenswrapper[4760]: I0121 16:17:59.636696 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11004437-56c2-4e20-911b-e31d6726fabc" path="/var/lib/kubelet/pods/11004437-56c2-4e20-911b-e31d6726fabc/volumes" Jan 21 16:17:59 crc kubenswrapper[4760]: I0121 16:17:59.637575 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="684f7edc-9176-4aeb-8b75-8f083ba14d04" 
path="/var/lib/kubelet/pods/684f7edc-9176-4aeb-8b75-8f083ba14d04/volumes" Jan 21 16:17:59 crc kubenswrapper[4760]: I0121 16:17:59.638438 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d5e4041-ff0a-416e-b541-480b17fcc32e" path="/var/lib/kubelet/pods/7d5e4041-ff0a-416e-b541-480b17fcc32e/volumes" Jan 21 16:17:59 crc kubenswrapper[4760]: I0121 16:17:59.639196 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83ff3135-0e1c-46b4-a3a2-5520a7d505da" path="/var/lib/kubelet/pods/83ff3135-0e1c-46b4-a3a2-5520a7d505da/volumes" Jan 21 16:17:59 crc kubenswrapper[4760]: I0121 16:17:59.640735 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95551b69-b405-4008-b600-7010cea057a2" path="/var/lib/kubelet/pods/95551b69-b405-4008-b600-7010cea057a2/volumes" Jan 21 16:17:59 crc kubenswrapper[4760]: I0121 16:17:59.641523 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2bb047a-4a1f-4617-8d7a-66f80c84ea4a" path="/var/lib/kubelet/pods/b2bb047a-4a1f-4617-8d7a-66f80c84ea4a/volumes" Jan 21 16:18:00 crc kubenswrapper[4760]: I0121 16:18:00.622927 4760 scope.go:117] "RemoveContainer" containerID="3397077c04f1562ba759efe22bb62c3f46a49e2e5a066d18955c7b87afdb4965" Jan 21 16:18:00 crc kubenswrapper[4760]: E0121 16:18:00.623248 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:18:12 crc kubenswrapper[4760]: I0121 16:18:12.623657 4760 scope.go:117] "RemoveContainer" containerID="3397077c04f1562ba759efe22bb62c3f46a49e2e5a066d18955c7b87afdb4965" Jan 21 16:18:12 crc kubenswrapper[4760]: E0121 16:18:12.624450 4760 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:18:27 crc kubenswrapper[4760]: I0121 16:18:27.623400 4760 scope.go:117] "RemoveContainer" containerID="3397077c04f1562ba759efe22bb62c3f46a49e2e5a066d18955c7b87afdb4965"
Jan 21 16:18:27 crc kubenswrapper[4760]: E0121 16:18:27.624721 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:18:41 crc kubenswrapper[4760]: I0121 16:18:41.622839 4760 scope.go:117] "RemoveContainer" containerID="3397077c04f1562ba759efe22bb62c3f46a49e2e5a066d18955c7b87afdb4965"
Jan 21 16:18:41 crc kubenswrapper[4760]: E0121 16:18:41.623961 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:18:41 crc kubenswrapper[4760]: I0121 16:18:41.687880 4760 scope.go:117] "RemoveContainer" containerID="ced70d1d4870a2dc28e44804244b887878eb363beffb85bfdf18c1407d5b7ab5"
Jan 21 16:18:41 crc kubenswrapper[4760]: I0121 16:18:41.732538 4760 scope.go:117] "RemoveContainer" containerID="bab012f846da9af9ee21c8e00f95dcde0d5a8453a7d5642dacc464322ba9fee4"
Jan 21 16:18:41 crc kubenswrapper[4760]: I0121 16:18:41.785588 4760 scope.go:117] "RemoveContainer" containerID="e21e8812181d09da583964e812ef51190f4006ea24869fc79e02b4f805740aad"
Jan 21 16:18:41 crc kubenswrapper[4760]: I0121 16:18:41.824448 4760 scope.go:117] "RemoveContainer" containerID="d45fc96b5bde6b50e5a181a4a1b65a2feaa27b328ff69b7618a0540b8bfa00ea"
Jan 21 16:18:41 crc kubenswrapper[4760]: I0121 16:18:41.867931 4760 scope.go:117] "RemoveContainer" containerID="9efd7cd0b2508fe6b4994db87ba2ac3d840259ba581e4a0b9385f3fb037f31a4"
Jan 21 16:18:41 crc kubenswrapper[4760]: I0121 16:18:41.920636 4760 scope.go:117] "RemoveContainer" containerID="aea9fe4bfa7bd096a20c98a988cc0352b5060d7074bdc9b9eacffe3c811bf1ca"
Jan 21 16:18:47 crc kubenswrapper[4760]: I0121 16:18:47.324765 4760 generic.go:334] "Generic (PLEG): container finished" podID="8adc5733-eeac-4148-878a-61b908f0a85b" containerID="e259a15eb5e97a769205ef21f8363656e1d7ea2b576cd3f5946a3708bee78524" exitCode=0
Jan 21 16:18:47 crc kubenswrapper[4760]: I0121 16:18:47.324856 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c7vns" event={"ID":"8adc5733-eeac-4148-878a-61b908f0a85b","Type":"ContainerDied","Data":"e259a15eb5e97a769205ef21f8363656e1d7ea2b576cd3f5946a3708bee78524"}
Jan 21 16:18:48 crc kubenswrapper[4760]: I0121 16:18:48.737477 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c7vns"
Jan 21 16:18:48 crc kubenswrapper[4760]: I0121 16:18:48.836030 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpbk9\" (UniqueName: \"kubernetes.io/projected/8adc5733-eeac-4148-878a-61b908f0a85b-kube-api-access-kpbk9\") pod \"8adc5733-eeac-4148-878a-61b908f0a85b\" (UID: \"8adc5733-eeac-4148-878a-61b908f0a85b\") "
Jan 21 16:18:48 crc kubenswrapper[4760]: I0121 16:18:48.836176 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8adc5733-eeac-4148-878a-61b908f0a85b-inventory\") pod \"8adc5733-eeac-4148-878a-61b908f0a85b\" (UID: \"8adc5733-eeac-4148-878a-61b908f0a85b\") "
Jan 21 16:18:48 crc kubenswrapper[4760]: I0121 16:18:48.836239 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8adc5733-eeac-4148-878a-61b908f0a85b-ssh-key-openstack-edpm-ipam\") pod \"8adc5733-eeac-4148-878a-61b908f0a85b\" (UID: \"8adc5733-eeac-4148-878a-61b908f0a85b\") "
Jan 21 16:18:48 crc kubenswrapper[4760]: I0121 16:18:48.842479 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8adc5733-eeac-4148-878a-61b908f0a85b-kube-api-access-kpbk9" (OuterVolumeSpecName: "kube-api-access-kpbk9") pod "8adc5733-eeac-4148-878a-61b908f0a85b" (UID: "8adc5733-eeac-4148-878a-61b908f0a85b"). InnerVolumeSpecName "kube-api-access-kpbk9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:18:48 crc kubenswrapper[4760]: I0121 16:18:48.863751 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8adc5733-eeac-4148-878a-61b908f0a85b-inventory" (OuterVolumeSpecName: "inventory") pod "8adc5733-eeac-4148-878a-61b908f0a85b" (UID: "8adc5733-eeac-4148-878a-61b908f0a85b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:18:48 crc kubenswrapper[4760]: I0121 16:18:48.868502 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8adc5733-eeac-4148-878a-61b908f0a85b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8adc5733-eeac-4148-878a-61b908f0a85b" (UID: "8adc5733-eeac-4148-878a-61b908f0a85b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:18:48 crc kubenswrapper[4760]: I0121 16:18:48.938443 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8adc5733-eeac-4148-878a-61b908f0a85b-inventory\") on node \"crc\" DevicePath \"\""
Jan 21 16:18:48 crc kubenswrapper[4760]: I0121 16:18:48.938493 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8adc5733-eeac-4148-878a-61b908f0a85b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 21 16:18:48 crc kubenswrapper[4760]: I0121 16:18:48.938507 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kpbk9\" (UniqueName: \"kubernetes.io/projected/8adc5733-eeac-4148-878a-61b908f0a85b-kube-api-access-kpbk9\") on node \"crc\" DevicePath \"\""
Jan 21 16:18:49 crc kubenswrapper[4760]: I0121 16:18:49.343091 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c7vns" event={"ID":"8adc5733-eeac-4148-878a-61b908f0a85b","Type":"ContainerDied","Data":"15828a65854121830bc8dd087c5abe31f28f6fd590ea1092abbc0d2df91c7afb"}
Jan 21 16:18:49 crc kubenswrapper[4760]: I0121 16:18:49.343155 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15828a65854121830bc8dd087c5abe31f28f6fd590ea1092abbc0d2df91c7afb"
Jan 21 16:18:49 crc kubenswrapper[4760]: I0121 16:18:49.343240 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-c7vns"
Jan 21 16:18:49 crc kubenswrapper[4760]: I0121 16:18:49.431086 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-csfth"]
Jan 21 16:18:49 crc kubenswrapper[4760]: E0121 16:18:49.431457 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8adc5733-eeac-4148-878a-61b908f0a85b" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Jan 21 16:18:49 crc kubenswrapper[4760]: I0121 16:18:49.431474 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="8adc5733-eeac-4148-878a-61b908f0a85b" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Jan 21 16:18:49 crc kubenswrapper[4760]: I0121 16:18:49.431718 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="8adc5733-eeac-4148-878a-61b908f0a85b" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Jan 21 16:18:49 crc kubenswrapper[4760]: I0121 16:18:49.432357 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-csfth"
Jan 21 16:18:49 crc kubenswrapper[4760]: I0121 16:18:49.434997 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-brqp8"
Jan 21 16:18:49 crc kubenswrapper[4760]: I0121 16:18:49.435153 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 21 16:18:49 crc kubenswrapper[4760]: I0121 16:18:49.435766 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 21 16:18:49 crc kubenswrapper[4760]: I0121 16:18:49.438837 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 21 16:18:49 crc kubenswrapper[4760]: I0121 16:18:49.447994 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b589bc2-f08a-4319-a56e-145673e19eee-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-csfth\" (UID: \"9b589bc2-f08a-4319-a56e-145673e19eee\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-csfth"
Jan 21 16:18:49 crc kubenswrapper[4760]: I0121 16:18:49.448150 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b589bc2-f08a-4319-a56e-145673e19eee-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-csfth\" (UID: \"9b589bc2-f08a-4319-a56e-145673e19eee\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-csfth"
Jan 21 16:18:49 crc kubenswrapper[4760]: I0121 16:18:49.448253 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nq822\" (UniqueName: \"kubernetes.io/projected/9b589bc2-f08a-4319-a56e-145673e19eee-kube-api-access-nq822\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-csfth\" (UID: \"9b589bc2-f08a-4319-a56e-145673e19eee\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-csfth"
Jan 21 16:18:49 crc kubenswrapper[4760]: I0121 16:18:49.454316 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-csfth"]
Jan 21 16:18:49 crc kubenswrapper[4760]: I0121 16:18:49.549732 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nq822\" (UniqueName: \"kubernetes.io/projected/9b589bc2-f08a-4319-a56e-145673e19eee-kube-api-access-nq822\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-csfth\" (UID: \"9b589bc2-f08a-4319-a56e-145673e19eee\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-csfth"
Jan 21 16:18:49 crc kubenswrapper[4760]: I0121 16:18:49.549845 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b589bc2-f08a-4319-a56e-145673e19eee-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-csfth\" (UID: \"9b589bc2-f08a-4319-a56e-145673e19eee\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-csfth"
Jan 21 16:18:49 crc kubenswrapper[4760]: I0121 16:18:49.549912 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b589bc2-f08a-4319-a56e-145673e19eee-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-csfth\" (UID: \"9b589bc2-f08a-4319-a56e-145673e19eee\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-csfth"
Jan 21 16:18:49 crc kubenswrapper[4760]: I0121 16:18:49.553648 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b589bc2-f08a-4319-a56e-145673e19eee-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-csfth\" (UID: \"9b589bc2-f08a-4319-a56e-145673e19eee\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-csfth"
Jan 21 16:18:49 crc kubenswrapper[4760]: I0121 16:18:49.554269 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b589bc2-f08a-4319-a56e-145673e19eee-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-csfth\" (UID: \"9b589bc2-f08a-4319-a56e-145673e19eee\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-csfth"
Jan 21 16:18:49 crc kubenswrapper[4760]: I0121 16:18:49.571704 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nq822\" (UniqueName: \"kubernetes.io/projected/9b589bc2-f08a-4319-a56e-145673e19eee-kube-api-access-nq822\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-csfth\" (UID: \"9b589bc2-f08a-4319-a56e-145673e19eee\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-csfth"
Jan 21 16:18:49 crc kubenswrapper[4760]: I0121 16:18:49.749145 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-csfth"
Jan 21 16:18:50 crc kubenswrapper[4760]: I0121 16:18:50.046828 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-csfth"]
Jan 21 16:18:50 crc kubenswrapper[4760]: I0121 16:18:50.062412 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-kwcw6"]
Jan 21 16:18:50 crc kubenswrapper[4760]: I0121 16:18:50.066578 4760 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 21 16:18:50 crc kubenswrapper[4760]: I0121 16:18:50.080483 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-kwcw6"]
Jan 21 16:18:50 crc kubenswrapper[4760]: I0121 16:18:50.359676 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-csfth" event={"ID":"9b589bc2-f08a-4319-a56e-145673e19eee","Type":"ContainerStarted","Data":"6181a120b8d301111e2c7d66a5b99addcd1373fdf4676204de822ced613c00eb"}
Jan 21 16:18:51 crc kubenswrapper[4760]: I0121 16:18:51.369716 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-csfth" event={"ID":"9b589bc2-f08a-4319-a56e-145673e19eee","Type":"ContainerStarted","Data":"8fd7d90b83fbd1ede633903ff070582ff60b6312d2ca45c1106083e175052fc3"}
Jan 21 16:18:51 crc kubenswrapper[4760]: I0121 16:18:51.389359 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-csfth" podStartSLOduration=1.42724115 podStartE2EDuration="2.389337851s" podCreationTimestamp="2026-01-21 16:18:49 +0000 UTC" firstStartedPulling="2026-01-21 16:18:50.066197442 +0000 UTC m=+1900.733967020" lastFinishedPulling="2026-01-21 16:18:51.028294143 +0000 UTC m=+1901.696063721" observedRunningTime="2026-01-21 16:18:51.389052013 +0000 UTC m=+1902.056821591" watchObservedRunningTime="2026-01-21 16:18:51.389337851 +0000 UTC m=+1902.057107439"
Jan 21 16:18:51 crc kubenswrapper[4760]: I0121 16:18:51.637178 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98bcfa69-f25f-4f8a-8018-664dbdf6e1d3" path="/var/lib/kubelet/pods/98bcfa69-f25f-4f8a-8018-664dbdf6e1d3/volumes"
Jan 21 16:18:52 crc kubenswrapper[4760]: I0121 16:18:52.623420 4760 scope.go:117] "RemoveContainer" containerID="3397077c04f1562ba759efe22bb62c3f46a49e2e5a066d18955c7b87afdb4965"
Jan 21 16:18:52 crc kubenswrapper[4760]: E0121 16:18:52.623972 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:18:56 crc kubenswrapper[4760]: I0121 16:18:56.410371 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-csfth" event={"ID":"9b589bc2-f08a-4319-a56e-145673e19eee","Type":"ContainerDied","Data":"8fd7d90b83fbd1ede633903ff070582ff60b6312d2ca45c1106083e175052fc3"}
Jan 21 16:18:56 crc kubenswrapper[4760]: I0121 16:18:56.410422 4760 generic.go:334] "Generic (PLEG): container finished" podID="9b589bc2-f08a-4319-a56e-145673e19eee" containerID="8fd7d90b83fbd1ede633903ff070582ff60b6312d2ca45c1106083e175052fc3" exitCode=0
Jan 21 16:18:57 crc kubenswrapper[4760]: I0121 16:18:57.808379 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-csfth"
Jan 21 16:18:57 crc kubenswrapper[4760]: I0121 16:18:57.904702 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b589bc2-f08a-4319-a56e-145673e19eee-ssh-key-openstack-edpm-ipam\") pod \"9b589bc2-f08a-4319-a56e-145673e19eee\" (UID: \"9b589bc2-f08a-4319-a56e-145673e19eee\") "
Jan 21 16:18:57 crc kubenswrapper[4760]: I0121 16:18:57.904758 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nq822\" (UniqueName: \"kubernetes.io/projected/9b589bc2-f08a-4319-a56e-145673e19eee-kube-api-access-nq822\") pod \"9b589bc2-f08a-4319-a56e-145673e19eee\" (UID: \"9b589bc2-f08a-4319-a56e-145673e19eee\") "
Jan 21 16:18:57 crc kubenswrapper[4760]: I0121 16:18:57.904812 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b589bc2-f08a-4319-a56e-145673e19eee-inventory\") pod \"9b589bc2-f08a-4319-a56e-145673e19eee\" (UID: \"9b589bc2-f08a-4319-a56e-145673e19eee\") "
Jan 21 16:18:57 crc kubenswrapper[4760]: I0121 16:18:57.911151 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b589bc2-f08a-4319-a56e-145673e19eee-kube-api-access-nq822" (OuterVolumeSpecName: "kube-api-access-nq822") pod "9b589bc2-f08a-4319-a56e-145673e19eee" (UID: "9b589bc2-f08a-4319-a56e-145673e19eee"). InnerVolumeSpecName "kube-api-access-nq822". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:18:57 crc kubenswrapper[4760]: I0121 16:18:57.931157 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b589bc2-f08a-4319-a56e-145673e19eee-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9b589bc2-f08a-4319-a56e-145673e19eee" (UID: "9b589bc2-f08a-4319-a56e-145673e19eee"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:18:57 crc kubenswrapper[4760]: I0121 16:18:57.943205 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b589bc2-f08a-4319-a56e-145673e19eee-inventory" (OuterVolumeSpecName: "inventory") pod "9b589bc2-f08a-4319-a56e-145673e19eee" (UID: "9b589bc2-f08a-4319-a56e-145673e19eee"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:18:58 crc kubenswrapper[4760]: I0121 16:18:58.007834 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b589bc2-f08a-4319-a56e-145673e19eee-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 21 16:18:58 crc kubenswrapper[4760]: I0121 16:18:58.007882 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nq822\" (UniqueName: \"kubernetes.io/projected/9b589bc2-f08a-4319-a56e-145673e19eee-kube-api-access-nq822\") on node \"crc\" DevicePath \"\""
Jan 21 16:18:58 crc kubenswrapper[4760]: I0121 16:18:58.007895 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9b589bc2-f08a-4319-a56e-145673e19eee-inventory\") on node \"crc\" DevicePath \"\""
Jan 21 16:18:58 crc kubenswrapper[4760]: I0121 16:18:58.426900 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-csfth" event={"ID":"9b589bc2-f08a-4319-a56e-145673e19eee","Type":"ContainerDied","Data":"6181a120b8d301111e2c7d66a5b99addcd1373fdf4676204de822ced613c00eb"}
Jan 21 16:18:58 crc kubenswrapper[4760]: I0121 16:18:58.426949 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6181a120b8d301111e2c7d66a5b99addcd1373fdf4676204de822ced613c00eb"
Jan 21 16:18:58 crc kubenswrapper[4760]: I0121 16:18:58.426994 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-csfth"
Jan 21 16:18:58 crc kubenswrapper[4760]: I0121 16:18:58.589014 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-lcwmb"]
Jan 21 16:18:58 crc kubenswrapper[4760]: E0121 16:18:58.589447 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b589bc2-f08a-4319-a56e-145673e19eee" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Jan 21 16:18:58 crc kubenswrapper[4760]: I0121 16:18:58.589468 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b589bc2-f08a-4319-a56e-145673e19eee" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Jan 21 16:18:58 crc kubenswrapper[4760]: I0121 16:18:58.589675 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b589bc2-f08a-4319-a56e-145673e19eee" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Jan 21 16:18:58 crc kubenswrapper[4760]: I0121 16:18:58.590265 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lcwmb"
Jan 21 16:18:58 crc kubenswrapper[4760]: I0121 16:18:58.592859 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 21 16:18:58 crc kubenswrapper[4760]: I0121 16:18:58.594044 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 21 16:18:58 crc kubenswrapper[4760]: I0121 16:18:58.594110 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-brqp8"
Jan 21 16:18:58 crc kubenswrapper[4760]: I0121 16:18:58.594110 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 21 16:18:58 crc kubenswrapper[4760]: I0121 16:18:58.605474 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-lcwmb"]
Jan 21 16:18:58 crc kubenswrapper[4760]: I0121 16:18:58.624943 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d89a08a9-deb3-4c27-ab2e-4fab854717cc-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lcwmb\" (UID: \"d89a08a9-deb3-4c27-ab2e-4fab854717cc\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lcwmb"
Jan 21 16:18:58 crc kubenswrapper[4760]: I0121 16:18:58.625016 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds8js\" (UniqueName: \"kubernetes.io/projected/d89a08a9-deb3-4c27-ab2e-4fab854717cc-kube-api-access-ds8js\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lcwmb\" (UID: \"d89a08a9-deb3-4c27-ab2e-4fab854717cc\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lcwmb"
Jan 21 16:18:58 crc kubenswrapper[4760]: I0121 16:18:58.625084 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d89a08a9-deb3-4c27-ab2e-4fab854717cc-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lcwmb\" (UID: \"d89a08a9-deb3-4c27-ab2e-4fab854717cc\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lcwmb"
Jan 21 16:18:58 crc kubenswrapper[4760]: I0121 16:18:58.726721 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d89a08a9-deb3-4c27-ab2e-4fab854717cc-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lcwmb\" (UID: \"d89a08a9-deb3-4c27-ab2e-4fab854717cc\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lcwmb"
Jan 21 16:18:58 crc kubenswrapper[4760]: I0121 16:18:58.726780 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ds8js\" (UniqueName: \"kubernetes.io/projected/d89a08a9-deb3-4c27-ab2e-4fab854717cc-kube-api-access-ds8js\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lcwmb\" (UID: \"d89a08a9-deb3-4c27-ab2e-4fab854717cc\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lcwmb"
Jan 21 16:18:58 crc kubenswrapper[4760]: I0121 16:18:58.726840 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d89a08a9-deb3-4c27-ab2e-4fab854717cc-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lcwmb\" (UID: \"d89a08a9-deb3-4c27-ab2e-4fab854717cc\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lcwmb"
Jan 21 16:18:58 crc kubenswrapper[4760]: I0121 16:18:58.731655 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d89a08a9-deb3-4c27-ab2e-4fab854717cc-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lcwmb\" (UID: \"d89a08a9-deb3-4c27-ab2e-4fab854717cc\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lcwmb"
Jan 21 16:18:58 crc kubenswrapper[4760]: I0121 16:18:58.732103 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d89a08a9-deb3-4c27-ab2e-4fab854717cc-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lcwmb\" (UID: \"d89a08a9-deb3-4c27-ab2e-4fab854717cc\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lcwmb"
Jan 21 16:18:58 crc kubenswrapper[4760]: I0121 16:18:58.745310 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ds8js\" (UniqueName: \"kubernetes.io/projected/d89a08a9-deb3-4c27-ab2e-4fab854717cc-kube-api-access-ds8js\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lcwmb\" (UID: \"d89a08a9-deb3-4c27-ab2e-4fab854717cc\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lcwmb"
Jan 21 16:18:58 crc kubenswrapper[4760]: I0121 16:18:58.909742 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lcwmb"
Jan 21 16:18:59 crc kubenswrapper[4760]: I0121 16:18:59.234263 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-lcwmb"]
Jan 21 16:18:59 crc kubenswrapper[4760]: I0121 16:18:59.437165 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lcwmb" event={"ID":"d89a08a9-deb3-4c27-ab2e-4fab854717cc","Type":"ContainerStarted","Data":"69f24c3f7be207796272e1568c0f048c997e1bc88094df1be346a8677aefc08e"}
Jan 21 16:19:00 crc kubenswrapper[4760]: I0121 16:19:00.447094 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lcwmb" event={"ID":"d89a08a9-deb3-4c27-ab2e-4fab854717cc","Type":"ContainerStarted","Data":"da42a2a3371b3301af1f0891830559180cdd4b56a9841290c8443e12b20c1c0a"}
Jan 21 16:19:00 crc kubenswrapper[4760]: I0121 16:19:00.469286 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lcwmb" podStartSLOduration=1.820466606 podStartE2EDuration="2.469266593s" podCreationTimestamp="2026-01-21 16:18:58 +0000 UTC" firstStartedPulling="2026-01-21 16:18:59.248505589 +0000 UTC m=+1909.916275167" lastFinishedPulling="2026-01-21 16:18:59.897305576 +0000 UTC m=+1910.565075154" observedRunningTime="2026-01-21 16:19:00.462678864 +0000 UTC m=+1911.130448452" watchObservedRunningTime="2026-01-21 16:19:00.469266593 +0000 UTC m=+1911.137036171"
Jan 21 16:19:05 crc kubenswrapper[4760]: I0121 16:19:05.625694 4760 scope.go:117] "RemoveContainer" containerID="3397077c04f1562ba759efe22bb62c3f46a49e2e5a066d18955c7b87afdb4965"
Jan 21 16:19:05 crc kubenswrapper[4760]: E0121 16:19:05.626718 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:19:17 crc kubenswrapper[4760]: I0121 16:19:17.033860 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-nfqg4"]
Jan 21 16:19:17 crc kubenswrapper[4760]: I0121 16:19:17.044082 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-nfqg4"]
Jan 21 16:19:17 crc kubenswrapper[4760]: I0121 16:19:17.634122 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337" path="/var/lib/kubelet/pods/d82f3cc9-65d7-4dc1-9f5e-bd43ac4f1337/volumes"
Jan 21 16:19:18 crc kubenswrapper[4760]: I0121 16:19:18.036134 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-jc7z7"]
Jan 21 16:19:18 crc kubenswrapper[4760]: I0121 16:19:18.046152 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-jc7z7"]
Jan 21 16:19:19 crc kubenswrapper[4760]: I0121 16:19:19.634372 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41faaec3-50be-468a-b6ea-8967aa8bbe99" path="/var/lib/kubelet/pods/41faaec3-50be-468a-b6ea-8967aa8bbe99/volumes"
Jan 21 16:19:20 crc kubenswrapper[4760]: I0121 16:19:20.624806 4760 scope.go:117] "RemoveContainer" containerID="3397077c04f1562ba759efe22bb62c3f46a49e2e5a066d18955c7b87afdb4965"
Jan 21 16:19:20 crc kubenswrapper[4760]: E0121 16:19:20.625275 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:19:35 crc kubenswrapper[4760]: I0121 16:19:35.622499 4760 scope.go:117] "RemoveContainer" containerID="3397077c04f1562ba759efe22bb62c3f46a49e2e5a066d18955c7b87afdb4965"
Jan 21 16:19:35 crc kubenswrapper[4760]: E0121 16:19:35.623253 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:19:38 crc kubenswrapper[4760]: E0121 16:19:38.130191 4760 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd89a08a9_deb3_4c27_ab2e_4fab854717cc.slice/crio-da42a2a3371b3301af1f0891830559180cdd4b56a9841290c8443e12b20c1c0a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd89a08a9_deb3_4c27_ab2e_4fab854717cc.slice/crio-conmon-da42a2a3371b3301af1f0891830559180cdd4b56a9841290c8443e12b20c1c0a.scope\": RecentStats: unable to find data in memory cache]"
Jan 21 16:19:38 crc kubenswrapper[4760]: I0121 16:19:38.764446 4760 generic.go:334] "Generic (PLEG): container finished" podID="d89a08a9-deb3-4c27-ab2e-4fab854717cc" containerID="da42a2a3371b3301af1f0891830559180cdd4b56a9841290c8443e12b20c1c0a" exitCode=0
Jan 21 16:19:38 crc kubenswrapper[4760]: I0121 16:19:38.764498 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lcwmb" event={"ID":"d89a08a9-deb3-4c27-ab2e-4fab854717cc","Type":"ContainerDied","Data":"da42a2a3371b3301af1f0891830559180cdd4b56a9841290c8443e12b20c1c0a"}
Jan 21 16:19:40 crc kubenswrapper[4760]: I0121 16:19:40.195419 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lcwmb"
Jan 21 16:19:40 crc kubenswrapper[4760]: I0121 16:19:40.255653 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ds8js\" (UniqueName: \"kubernetes.io/projected/d89a08a9-deb3-4c27-ab2e-4fab854717cc-kube-api-access-ds8js\") pod \"d89a08a9-deb3-4c27-ab2e-4fab854717cc\" (UID: \"d89a08a9-deb3-4c27-ab2e-4fab854717cc\") "
Jan 21 16:19:40 crc kubenswrapper[4760]: I0121 16:19:40.255715 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d89a08a9-deb3-4c27-ab2e-4fab854717cc-inventory\") pod \"d89a08a9-deb3-4c27-ab2e-4fab854717cc\" (UID: \"d89a08a9-deb3-4c27-ab2e-4fab854717cc\") "
Jan 21 16:19:40 crc kubenswrapper[4760]: I0121 16:19:40.255769 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d89a08a9-deb3-4c27-ab2e-4fab854717cc-ssh-key-openstack-edpm-ipam\") pod \"d89a08a9-deb3-4c27-ab2e-4fab854717cc\" (UID: \"d89a08a9-deb3-4c27-ab2e-4fab854717cc\") "
Jan 21 16:19:40 crc kubenswrapper[4760]: I0121 16:19:40.266474 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d89a08a9-deb3-4c27-ab2e-4fab854717cc-kube-api-access-ds8js" (OuterVolumeSpecName: "kube-api-access-ds8js") pod "d89a08a9-deb3-4c27-ab2e-4fab854717cc" (UID: "d89a08a9-deb3-4c27-ab2e-4fab854717cc"). InnerVolumeSpecName "kube-api-access-ds8js". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:19:40 crc kubenswrapper[4760]: I0121 16:19:40.285686 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d89a08a9-deb3-4c27-ab2e-4fab854717cc-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d89a08a9-deb3-4c27-ab2e-4fab854717cc" (UID: "d89a08a9-deb3-4c27-ab2e-4fab854717cc"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:19:40 crc kubenswrapper[4760]: I0121 16:19:40.286605 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d89a08a9-deb3-4c27-ab2e-4fab854717cc-inventory" (OuterVolumeSpecName: "inventory") pod "d89a08a9-deb3-4c27-ab2e-4fab854717cc" (UID: "d89a08a9-deb3-4c27-ab2e-4fab854717cc"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:19:40 crc kubenswrapper[4760]: I0121 16:19:40.360140 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ds8js\" (UniqueName: \"kubernetes.io/projected/d89a08a9-deb3-4c27-ab2e-4fab854717cc-kube-api-access-ds8js\") on node \"crc\" DevicePath \"\""
Jan 21 16:19:40 crc kubenswrapper[4760]: I0121 16:19:40.360211 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d89a08a9-deb3-4c27-ab2e-4fab854717cc-inventory\") on node \"crc\" DevicePath \"\""
Jan 21 16:19:40 crc kubenswrapper[4760]: I0121 16:19:40.360226 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d89a08a9-deb3-4c27-ab2e-4fab854717cc-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 21 16:19:40 crc kubenswrapper[4760]: I0121 16:19:40.788032 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lcwmb" event={"ID":"d89a08a9-deb3-4c27-ab2e-4fab854717cc","Type":"ContainerDied","Data":"69f24c3f7be207796272e1568c0f048c997e1bc88094df1be346a8677aefc08e"}
Jan 21 16:19:40 crc kubenswrapper[4760]: I0121 16:19:40.788077 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69f24c3f7be207796272e1568c0f048c997e1bc88094df1be346a8677aefc08e"
Jan 21 16:19:40 crc kubenswrapper[4760]: I0121 16:19:40.788092 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lcwmb"
Jan 21 16:19:40 crc kubenswrapper[4760]: I0121 16:19:40.875503 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2v6sq"]
Jan 21 16:19:40 crc kubenswrapper[4760]: E0121 16:19:40.876003 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d89a08a9-deb3-4c27-ab2e-4fab854717cc" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Jan 21 16:19:40 crc kubenswrapper[4760]: I0121 16:19:40.876033 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="d89a08a9-deb3-4c27-ab2e-4fab854717cc" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Jan 21 16:19:40 crc kubenswrapper[4760]: I0121 16:19:40.876243 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="d89a08a9-deb3-4c27-ab2e-4fab854717cc" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Jan 21 16:19:40 crc kubenswrapper[4760]: I0121 16:19:40.876925 4760 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2v6sq" Jan 21 16:19:40 crc kubenswrapper[4760]: I0121 16:19:40.878902 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 16:19:40 crc kubenswrapper[4760]: I0121 16:19:40.878980 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-brqp8" Jan 21 16:19:40 crc kubenswrapper[4760]: I0121 16:19:40.879481 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:19:40 crc kubenswrapper[4760]: I0121 16:19:40.882753 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 16:19:40 crc kubenswrapper[4760]: I0121 16:19:40.885766 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2v6sq"] Jan 21 16:19:40 crc kubenswrapper[4760]: I0121 16:19:40.971163 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7gq6\" (UniqueName: \"kubernetes.io/projected/cd8384f1-8b63-421a-b279-ae67ba25c2d2-kube-api-access-t7gq6\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2v6sq\" (UID: \"cd8384f1-8b63-421a-b279-ae67ba25c2d2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2v6sq" Jan 21 16:19:40 crc kubenswrapper[4760]: I0121 16:19:40.971316 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cd8384f1-8b63-421a-b279-ae67ba25c2d2-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2v6sq\" (UID: \"cd8384f1-8b63-421a-b279-ae67ba25c2d2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2v6sq" Jan 21 16:19:40 crc kubenswrapper[4760]: I0121 16:19:40.971413 4760 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cd8384f1-8b63-421a-b279-ae67ba25c2d2-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2v6sq\" (UID: \"cd8384f1-8b63-421a-b279-ae67ba25c2d2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2v6sq" Jan 21 16:19:41 crc kubenswrapper[4760]: I0121 16:19:41.073211 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cd8384f1-8b63-421a-b279-ae67ba25c2d2-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2v6sq\" (UID: \"cd8384f1-8b63-421a-b279-ae67ba25c2d2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2v6sq" Jan 21 16:19:41 crc kubenswrapper[4760]: I0121 16:19:41.073658 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7gq6\" (UniqueName: \"kubernetes.io/projected/cd8384f1-8b63-421a-b279-ae67ba25c2d2-kube-api-access-t7gq6\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2v6sq\" (UID: \"cd8384f1-8b63-421a-b279-ae67ba25c2d2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2v6sq" Jan 21 16:19:41 crc kubenswrapper[4760]: I0121 16:19:41.073735 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cd8384f1-8b63-421a-b279-ae67ba25c2d2-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2v6sq\" (UID: \"cd8384f1-8b63-421a-b279-ae67ba25c2d2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2v6sq" Jan 21 16:19:41 crc kubenswrapper[4760]: I0121 16:19:41.077820 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cd8384f1-8b63-421a-b279-ae67ba25c2d2-inventory\") 
pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2v6sq\" (UID: \"cd8384f1-8b63-421a-b279-ae67ba25c2d2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2v6sq" Jan 21 16:19:41 crc kubenswrapper[4760]: I0121 16:19:41.078024 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cd8384f1-8b63-421a-b279-ae67ba25c2d2-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2v6sq\" (UID: \"cd8384f1-8b63-421a-b279-ae67ba25c2d2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2v6sq" Jan 21 16:19:41 crc kubenswrapper[4760]: I0121 16:19:41.096587 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7gq6\" (UniqueName: \"kubernetes.io/projected/cd8384f1-8b63-421a-b279-ae67ba25c2d2-kube-api-access-t7gq6\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2v6sq\" (UID: \"cd8384f1-8b63-421a-b279-ae67ba25c2d2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2v6sq" Jan 21 16:19:41 crc kubenswrapper[4760]: I0121 16:19:41.194016 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2v6sq" Jan 21 16:19:41 crc kubenswrapper[4760]: I0121 16:19:41.493609 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2v6sq"] Jan 21 16:19:41 crc kubenswrapper[4760]: I0121 16:19:41.796585 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2v6sq" event={"ID":"cd8384f1-8b63-421a-b279-ae67ba25c2d2","Type":"ContainerStarted","Data":"ae76b2477ff62323ef50d744aeaa351991b2cbf5fbf59b1703275b0293d8f122"} Jan 21 16:19:42 crc kubenswrapper[4760]: I0121 16:19:42.092917 4760 scope.go:117] "RemoveContainer" containerID="84e4c84af1ff2143de51cce41a89dfda5cc1a65931e6b3d93329ca7a543f98e7" Jan 21 16:19:42 crc kubenswrapper[4760]: I0121 16:19:42.145966 4760 scope.go:117] "RemoveContainer" containerID="de3b43ba5de05ae071625dc753b0e6fa90712bb4fb5fcaf851c2c4dd803c1010" Jan 21 16:19:42 crc kubenswrapper[4760]: I0121 16:19:42.246107 4760 scope.go:117] "RemoveContainer" containerID="eeccdd81ea232e4dcba3aef2b5197d4e6caf13903fd97e8757ca0d0a5fbe2017" Jan 21 16:19:42 crc kubenswrapper[4760]: I0121 16:19:42.286302 4760 scope.go:117] "RemoveContainer" containerID="25fbd1192021a95afa834c5f9d67ae402224c1fce9b4fbde8cc6f9cf2cbff83b" Jan 21 16:19:42 crc kubenswrapper[4760]: I0121 16:19:42.306456 4760 scope.go:117] "RemoveContainer" containerID="426c0c59a999b5bd2ae19339a27fe525b6c94ec350e061e1f7dafdee3a114a4b" Jan 21 16:19:42 crc kubenswrapper[4760]: I0121 16:19:42.345706 4760 scope.go:117] "RemoveContainer" containerID="ff0db16b702c509a9465d5c008cbe9aad0899e81917702f30f5fa2e237c2f394" Jan 21 16:19:42 crc kubenswrapper[4760]: I0121 16:19:42.805771 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2v6sq" 
event={"ID":"cd8384f1-8b63-421a-b279-ae67ba25c2d2","Type":"ContainerStarted","Data":"8e7f71ae8e5a0a4cdc950144e476299be20ce3d12b2e101795ad35df0cc37e3d"} Jan 21 16:19:50 crc kubenswrapper[4760]: I0121 16:19:50.622679 4760 scope.go:117] "RemoveContainer" containerID="3397077c04f1562ba759efe22bb62c3f46a49e2e5a066d18955c7b87afdb4965" Jan 21 16:19:50 crc kubenswrapper[4760]: E0121 16:19:50.624674 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:20:02 crc kubenswrapper[4760]: I0121 16:20:02.041994 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2v6sq" podStartSLOduration=21.514143755 podStartE2EDuration="22.041966285s" podCreationTimestamp="2026-01-21 16:19:40 +0000 UTC" firstStartedPulling="2026-01-21 16:19:41.530218065 +0000 UTC m=+1952.197987643" lastFinishedPulling="2026-01-21 16:19:42.058040595 +0000 UTC m=+1952.725810173" observedRunningTime="2026-01-21 16:19:42.829468158 +0000 UTC m=+1953.497237746" watchObservedRunningTime="2026-01-21 16:20:02.041966285 +0000 UTC m=+1972.709735863" Jan 21 16:20:02 crc kubenswrapper[4760]: I0121 16:20:02.047500 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-vdmds"] Jan 21 16:20:02 crc kubenswrapper[4760]: I0121 16:20:02.060194 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-vdmds"] Jan 21 16:20:03 crc kubenswrapper[4760]: I0121 16:20:03.627669 4760 scope.go:117] "RemoveContainer" containerID="3397077c04f1562ba759efe22bb62c3f46a49e2e5a066d18955c7b87afdb4965" Jan 21 
16:20:03 crc kubenswrapper[4760]: E0121 16:20:03.627976 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:20:03 crc kubenswrapper[4760]: I0121 16:20:03.640764 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcb4a273-5a24-4d7b-b071-53db16ef9f47" path="/var/lib/kubelet/pods/bcb4a273-5a24-4d7b-b071-53db16ef9f47/volumes" Jan 21 16:20:18 crc kubenswrapper[4760]: I0121 16:20:18.646775 4760 scope.go:117] "RemoveContainer" containerID="3397077c04f1562ba759efe22bb62c3f46a49e2e5a066d18955c7b87afdb4965" Jan 21 16:20:18 crc kubenswrapper[4760]: E0121 16:20:18.647715 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:20:30 crc kubenswrapper[4760]: I0121 16:20:30.623409 4760 scope.go:117] "RemoveContainer" containerID="3397077c04f1562ba759efe22bb62c3f46a49e2e5a066d18955c7b87afdb4965" Jan 21 16:20:31 crc kubenswrapper[4760]: I0121 16:20:31.201136 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerStarted","Data":"c2840f7fa52c7be05357223bdfaf116cc5b43319886b3e09ab5d04195688d268"} Jan 21 16:20:31 crc kubenswrapper[4760]: I0121 16:20:31.205382 4760 
generic.go:334] "Generic (PLEG): container finished" podID="cd8384f1-8b63-421a-b279-ae67ba25c2d2" containerID="8e7f71ae8e5a0a4cdc950144e476299be20ce3d12b2e101795ad35df0cc37e3d" exitCode=0 Jan 21 16:20:31 crc kubenswrapper[4760]: I0121 16:20:31.205437 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2v6sq" event={"ID":"cd8384f1-8b63-421a-b279-ae67ba25c2d2","Type":"ContainerDied","Data":"8e7f71ae8e5a0a4cdc950144e476299be20ce3d12b2e101795ad35df0cc37e3d"} Jan 21 16:20:32 crc kubenswrapper[4760]: I0121 16:20:32.624109 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2v6sq" Jan 21 16:20:32 crc kubenswrapper[4760]: I0121 16:20:32.766765 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cd8384f1-8b63-421a-b279-ae67ba25c2d2-inventory\") pod \"cd8384f1-8b63-421a-b279-ae67ba25c2d2\" (UID: \"cd8384f1-8b63-421a-b279-ae67ba25c2d2\") " Jan 21 16:20:32 crc kubenswrapper[4760]: I0121 16:20:32.766896 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7gq6\" (UniqueName: \"kubernetes.io/projected/cd8384f1-8b63-421a-b279-ae67ba25c2d2-kube-api-access-t7gq6\") pod \"cd8384f1-8b63-421a-b279-ae67ba25c2d2\" (UID: \"cd8384f1-8b63-421a-b279-ae67ba25c2d2\") " Jan 21 16:20:32 crc kubenswrapper[4760]: I0121 16:20:32.767679 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cd8384f1-8b63-421a-b279-ae67ba25c2d2-ssh-key-openstack-edpm-ipam\") pod \"cd8384f1-8b63-421a-b279-ae67ba25c2d2\" (UID: \"cd8384f1-8b63-421a-b279-ae67ba25c2d2\") " Jan 21 16:20:32 crc kubenswrapper[4760]: I0121 16:20:32.775408 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/cd8384f1-8b63-421a-b279-ae67ba25c2d2-kube-api-access-t7gq6" (OuterVolumeSpecName: "kube-api-access-t7gq6") pod "cd8384f1-8b63-421a-b279-ae67ba25c2d2" (UID: "cd8384f1-8b63-421a-b279-ae67ba25c2d2"). InnerVolumeSpecName "kube-api-access-t7gq6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:20:32 crc kubenswrapper[4760]: I0121 16:20:32.798768 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd8384f1-8b63-421a-b279-ae67ba25c2d2-inventory" (OuterVolumeSpecName: "inventory") pod "cd8384f1-8b63-421a-b279-ae67ba25c2d2" (UID: "cd8384f1-8b63-421a-b279-ae67ba25c2d2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:20:32 crc kubenswrapper[4760]: I0121 16:20:32.799863 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd8384f1-8b63-421a-b279-ae67ba25c2d2-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "cd8384f1-8b63-421a-b279-ae67ba25c2d2" (UID: "cd8384f1-8b63-421a-b279-ae67ba25c2d2"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:20:32 crc kubenswrapper[4760]: I0121 16:20:32.869967 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cd8384f1-8b63-421a-b279-ae67ba25c2d2-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 16:20:32 crc kubenswrapper[4760]: I0121 16:20:32.870013 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cd8384f1-8b63-421a-b279-ae67ba25c2d2-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:20:32 crc kubenswrapper[4760]: I0121 16:20:32.870031 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7gq6\" (UniqueName: \"kubernetes.io/projected/cd8384f1-8b63-421a-b279-ae67ba25c2d2-kube-api-access-t7gq6\") on node \"crc\" DevicePath \"\"" Jan 21 16:20:33 crc kubenswrapper[4760]: I0121 16:20:33.223014 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2v6sq" event={"ID":"cd8384f1-8b63-421a-b279-ae67ba25c2d2","Type":"ContainerDied","Data":"ae76b2477ff62323ef50d744aeaa351991b2cbf5fbf59b1703275b0293d8f122"} Jan 21 16:20:33 crc kubenswrapper[4760]: I0121 16:20:33.223491 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae76b2477ff62323ef50d744aeaa351991b2cbf5fbf59b1703275b0293d8f122" Jan 21 16:20:33 crc kubenswrapper[4760]: I0121 16:20:33.223136 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2v6sq" Jan 21 16:20:33 crc kubenswrapper[4760]: I0121 16:20:33.326846 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-2hb8p"] Jan 21 16:20:33 crc kubenswrapper[4760]: E0121 16:20:33.327557 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd8384f1-8b63-421a-b279-ae67ba25c2d2" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 21 16:20:33 crc kubenswrapper[4760]: I0121 16:20:33.327587 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd8384f1-8b63-421a-b279-ae67ba25c2d2" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 21 16:20:33 crc kubenswrapper[4760]: I0121 16:20:33.327839 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd8384f1-8b63-421a-b279-ae67ba25c2d2" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 21 16:20:33 crc kubenswrapper[4760]: I0121 16:20:33.329057 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-2hb8p" Jan 21 16:20:33 crc kubenswrapper[4760]: I0121 16:20:33.331199 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 16:20:33 crc kubenswrapper[4760]: I0121 16:20:33.331923 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-brqp8" Jan 21 16:20:33 crc kubenswrapper[4760]: I0121 16:20:33.332123 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 16:20:33 crc kubenswrapper[4760]: I0121 16:20:33.334547 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:20:33 crc kubenswrapper[4760]: I0121 16:20:33.336468 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-2hb8p"] Jan 21 16:20:33 crc kubenswrapper[4760]: I0121 16:20:33.480377 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/28bf7889-c488-4d87-8b69-e477b27a7909-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-2hb8p\" (UID: \"28bf7889-c488-4d87-8b69-e477b27a7909\") " pod="openstack/ssh-known-hosts-edpm-deployment-2hb8p" Jan 21 16:20:33 crc kubenswrapper[4760]: I0121 16:20:33.480426 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/28bf7889-c488-4d87-8b69-e477b27a7909-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-2hb8p\" (UID: \"28bf7889-c488-4d87-8b69-e477b27a7909\") " pod="openstack/ssh-known-hosts-edpm-deployment-2hb8p" Jan 21 16:20:33 crc kubenswrapper[4760]: I0121 16:20:33.480518 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-6sl22\" (UniqueName: \"kubernetes.io/projected/28bf7889-c488-4d87-8b69-e477b27a7909-kube-api-access-6sl22\") pod \"ssh-known-hosts-edpm-deployment-2hb8p\" (UID: \"28bf7889-c488-4d87-8b69-e477b27a7909\") " pod="openstack/ssh-known-hosts-edpm-deployment-2hb8p" Jan 21 16:20:33 crc kubenswrapper[4760]: I0121 16:20:33.582680 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6sl22\" (UniqueName: \"kubernetes.io/projected/28bf7889-c488-4d87-8b69-e477b27a7909-kube-api-access-6sl22\") pod \"ssh-known-hosts-edpm-deployment-2hb8p\" (UID: \"28bf7889-c488-4d87-8b69-e477b27a7909\") " pod="openstack/ssh-known-hosts-edpm-deployment-2hb8p" Jan 21 16:20:33 crc kubenswrapper[4760]: I0121 16:20:33.582790 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/28bf7889-c488-4d87-8b69-e477b27a7909-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-2hb8p\" (UID: \"28bf7889-c488-4d87-8b69-e477b27a7909\") " pod="openstack/ssh-known-hosts-edpm-deployment-2hb8p" Jan 21 16:20:33 crc kubenswrapper[4760]: I0121 16:20:33.582819 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/28bf7889-c488-4d87-8b69-e477b27a7909-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-2hb8p\" (UID: \"28bf7889-c488-4d87-8b69-e477b27a7909\") " pod="openstack/ssh-known-hosts-edpm-deployment-2hb8p" Jan 21 16:20:33 crc kubenswrapper[4760]: I0121 16:20:33.588987 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/28bf7889-c488-4d87-8b69-e477b27a7909-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-2hb8p\" (UID: \"28bf7889-c488-4d87-8b69-e477b27a7909\") " pod="openstack/ssh-known-hosts-edpm-deployment-2hb8p" Jan 21 16:20:33 crc kubenswrapper[4760]: I0121 
16:20:33.590792 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/28bf7889-c488-4d87-8b69-e477b27a7909-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-2hb8p\" (UID: \"28bf7889-c488-4d87-8b69-e477b27a7909\") " pod="openstack/ssh-known-hosts-edpm-deployment-2hb8p" Jan 21 16:20:33 crc kubenswrapper[4760]: I0121 16:20:33.601762 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sl22\" (UniqueName: \"kubernetes.io/projected/28bf7889-c488-4d87-8b69-e477b27a7909-kube-api-access-6sl22\") pod \"ssh-known-hosts-edpm-deployment-2hb8p\" (UID: \"28bf7889-c488-4d87-8b69-e477b27a7909\") " pod="openstack/ssh-known-hosts-edpm-deployment-2hb8p" Jan 21 16:20:33 crc kubenswrapper[4760]: I0121 16:20:33.691614 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-2hb8p" Jan 21 16:20:33 crc kubenswrapper[4760]: I0121 16:20:33.989352 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-2hb8p"] Jan 21 16:20:33 crc kubenswrapper[4760]: W0121 16:20:33.993700 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28bf7889_c488_4d87_8b69_e477b27a7909.slice/crio-9d708c4cb89667f11c6fa2a63cbbde876a43b760ea1b69127f55925a71eea349 WatchSource:0}: Error finding container 9d708c4cb89667f11c6fa2a63cbbde876a43b760ea1b69127f55925a71eea349: Status 404 returned error can't find the container with id 9d708c4cb89667f11c6fa2a63cbbde876a43b760ea1b69127f55925a71eea349 Jan 21 16:20:34 crc kubenswrapper[4760]: I0121 16:20:34.232597 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-2hb8p" event={"ID":"28bf7889-c488-4d87-8b69-e477b27a7909","Type":"ContainerStarted","Data":"9d708c4cb89667f11c6fa2a63cbbde876a43b760ea1b69127f55925a71eea349"} Jan 21 
16:20:35 crc kubenswrapper[4760]: I0121 16:20:35.242645 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-2hb8p" event={"ID":"28bf7889-c488-4d87-8b69-e477b27a7909","Type":"ContainerStarted","Data":"a03304a2814e8ad045e6c51e3176bcc3a342b245348065b131ac4f6470237ebf"} Jan 21 16:20:35 crc kubenswrapper[4760]: I0121 16:20:35.264300 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-2hb8p" podStartSLOduration=1.625498624 podStartE2EDuration="2.264282922s" podCreationTimestamp="2026-01-21 16:20:33 +0000 UTC" firstStartedPulling="2026-01-21 16:20:33.996200675 +0000 UTC m=+2004.663970293" lastFinishedPulling="2026-01-21 16:20:34.634984973 +0000 UTC m=+2005.302754591" observedRunningTime="2026-01-21 16:20:35.259477486 +0000 UTC m=+2005.927247064" watchObservedRunningTime="2026-01-21 16:20:35.264282922 +0000 UTC m=+2005.932052500" Jan 21 16:20:42 crc kubenswrapper[4760]: I0121 16:20:42.417831 4760 generic.go:334] "Generic (PLEG): container finished" podID="28bf7889-c488-4d87-8b69-e477b27a7909" containerID="a03304a2814e8ad045e6c51e3176bcc3a342b245348065b131ac4f6470237ebf" exitCode=0 Jan 21 16:20:42 crc kubenswrapper[4760]: I0121 16:20:42.417932 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-2hb8p" event={"ID":"28bf7889-c488-4d87-8b69-e477b27a7909","Type":"ContainerDied","Data":"a03304a2814e8ad045e6c51e3176bcc3a342b245348065b131ac4f6470237ebf"} Jan 21 16:20:42 crc kubenswrapper[4760]: I0121 16:20:42.480880 4760 scope.go:117] "RemoveContainer" containerID="f84fee895d5b623eba76a6d52894ce3241208b6a45938ac93ce1de5aef19d4f7" Jan 21 16:20:43 crc kubenswrapper[4760]: I0121 16:20:43.831463 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-2hb8p" Jan 21 16:20:43 crc kubenswrapper[4760]: I0121 16:20:43.851731 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/28bf7889-c488-4d87-8b69-e477b27a7909-inventory-0\") pod \"28bf7889-c488-4d87-8b69-e477b27a7909\" (UID: \"28bf7889-c488-4d87-8b69-e477b27a7909\") " Jan 21 16:20:43 crc kubenswrapper[4760]: I0121 16:20:43.890393 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28bf7889-c488-4d87-8b69-e477b27a7909-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "28bf7889-c488-4d87-8b69-e477b27a7909" (UID: "28bf7889-c488-4d87-8b69-e477b27a7909"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:20:43 crc kubenswrapper[4760]: I0121 16:20:43.952751 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6sl22\" (UniqueName: \"kubernetes.io/projected/28bf7889-c488-4d87-8b69-e477b27a7909-kube-api-access-6sl22\") pod \"28bf7889-c488-4d87-8b69-e477b27a7909\" (UID: \"28bf7889-c488-4d87-8b69-e477b27a7909\") " Jan 21 16:20:43 crc kubenswrapper[4760]: I0121 16:20:43.952840 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/28bf7889-c488-4d87-8b69-e477b27a7909-ssh-key-openstack-edpm-ipam\") pod \"28bf7889-c488-4d87-8b69-e477b27a7909\" (UID: \"28bf7889-c488-4d87-8b69-e477b27a7909\") " Jan 21 16:20:43 crc kubenswrapper[4760]: I0121 16:20:43.953488 4760 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/28bf7889-c488-4d87-8b69-e477b27a7909-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 21 16:20:43 crc kubenswrapper[4760]: I0121 16:20:43.957241 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/28bf7889-c488-4d87-8b69-e477b27a7909-kube-api-access-6sl22" (OuterVolumeSpecName: "kube-api-access-6sl22") pod "28bf7889-c488-4d87-8b69-e477b27a7909" (UID: "28bf7889-c488-4d87-8b69-e477b27a7909"). InnerVolumeSpecName "kube-api-access-6sl22". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:20:43 crc kubenswrapper[4760]: I0121 16:20:43.978966 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28bf7889-c488-4d87-8b69-e477b27a7909-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "28bf7889-c488-4d87-8b69-e477b27a7909" (UID: "28bf7889-c488-4d87-8b69-e477b27a7909"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:20:44 crc kubenswrapper[4760]: I0121 16:20:44.055313 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6sl22\" (UniqueName: \"kubernetes.io/projected/28bf7889-c488-4d87-8b69-e477b27a7909-kube-api-access-6sl22\") on node \"crc\" DevicePath \"\""
Jan 21 16:20:44 crc kubenswrapper[4760]: I0121 16:20:44.055413 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/28bf7889-c488-4d87-8b69-e477b27a7909-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 21 16:20:44 crc kubenswrapper[4760]: I0121 16:20:44.456349 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-2hb8p" event={"ID":"28bf7889-c488-4d87-8b69-e477b27a7909","Type":"ContainerDied","Data":"9d708c4cb89667f11c6fa2a63cbbde876a43b760ea1b69127f55925a71eea349"}
Jan 21 16:20:44 crc kubenswrapper[4760]: I0121 16:20:44.456694 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d708c4cb89667f11c6fa2a63cbbde876a43b760ea1b69127f55925a71eea349"
Jan 21 16:20:44 crc kubenswrapper[4760]: I0121 16:20:44.456468 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-2hb8p"
Jan 21 16:20:44 crc kubenswrapper[4760]: I0121 16:20:44.535201 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5hkb"]
Jan 21 16:20:44 crc kubenswrapper[4760]: E0121 16:20:44.535671 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28bf7889-c488-4d87-8b69-e477b27a7909" containerName="ssh-known-hosts-edpm-deployment"
Jan 21 16:20:44 crc kubenswrapper[4760]: I0121 16:20:44.535689 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="28bf7889-c488-4d87-8b69-e477b27a7909" containerName="ssh-known-hosts-edpm-deployment"
Jan 21 16:20:44 crc kubenswrapper[4760]: I0121 16:20:44.535850 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="28bf7889-c488-4d87-8b69-e477b27a7909" containerName="ssh-known-hosts-edpm-deployment"
Jan 21 16:20:44 crc kubenswrapper[4760]: I0121 16:20:44.536585 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5hkb"
Jan 21 16:20:44 crc kubenswrapper[4760]: I0121 16:20:44.542999 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 21 16:20:44 crc kubenswrapper[4760]: I0121 16:20:44.544692 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 21 16:20:44 crc kubenswrapper[4760]: I0121 16:20:44.546132 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-brqp8"
Jan 21 16:20:44 crc kubenswrapper[4760]: I0121 16:20:44.546420 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 21 16:20:44 crc kubenswrapper[4760]: I0121 16:20:44.565152 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5hkb"]
Jan 21 16:20:44 crc kubenswrapper[4760]: I0121 16:20:44.566953 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0d57ee5-e43e-4edf-bbb1-1429b366bfac-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-j5hkb\" (UID: \"e0d57ee5-e43e-4edf-bbb1-1429b366bfac\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5hkb"
Jan 21 16:20:44 crc kubenswrapper[4760]: I0121 16:20:44.566991 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g5kq\" (UniqueName: \"kubernetes.io/projected/e0d57ee5-e43e-4edf-bbb1-1429b366bfac-kube-api-access-7g5kq\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-j5hkb\" (UID: \"e0d57ee5-e43e-4edf-bbb1-1429b366bfac\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5hkb"
Jan 21 16:20:44 crc kubenswrapper[4760]: I0121 16:20:44.567024 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e0d57ee5-e43e-4edf-bbb1-1429b366bfac-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-j5hkb\" (UID: \"e0d57ee5-e43e-4edf-bbb1-1429b366bfac\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5hkb"
Jan 21 16:20:44 crc kubenswrapper[4760]: I0121 16:20:44.668635 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e0d57ee5-e43e-4edf-bbb1-1429b366bfac-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-j5hkb\" (UID: \"e0d57ee5-e43e-4edf-bbb1-1429b366bfac\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5hkb"
Jan 21 16:20:44 crc kubenswrapper[4760]: I0121 16:20:44.668819 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0d57ee5-e43e-4edf-bbb1-1429b366bfac-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-j5hkb\" (UID: \"e0d57ee5-e43e-4edf-bbb1-1429b366bfac\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5hkb"
Jan 21 16:20:44 crc kubenswrapper[4760]: I0121 16:20:44.668863 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7g5kq\" (UniqueName: \"kubernetes.io/projected/e0d57ee5-e43e-4edf-bbb1-1429b366bfac-kube-api-access-7g5kq\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-j5hkb\" (UID: \"e0d57ee5-e43e-4edf-bbb1-1429b366bfac\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5hkb"
Jan 21 16:20:44 crc kubenswrapper[4760]: I0121 16:20:44.673127 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0d57ee5-e43e-4edf-bbb1-1429b366bfac-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-j5hkb\" (UID: \"e0d57ee5-e43e-4edf-bbb1-1429b366bfac\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5hkb"
Jan 21 16:20:44 crc kubenswrapper[4760]: I0121 16:20:44.673712 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e0d57ee5-e43e-4edf-bbb1-1429b366bfac-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-j5hkb\" (UID: \"e0d57ee5-e43e-4edf-bbb1-1429b366bfac\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5hkb"
Jan 21 16:20:44 crc kubenswrapper[4760]: I0121 16:20:44.684233 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7g5kq\" (UniqueName: \"kubernetes.io/projected/e0d57ee5-e43e-4edf-bbb1-1429b366bfac-kube-api-access-7g5kq\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-j5hkb\" (UID: \"e0d57ee5-e43e-4edf-bbb1-1429b366bfac\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5hkb"
Jan 21 16:20:44 crc kubenswrapper[4760]: I0121 16:20:44.863315 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5hkb"
Jan 21 16:20:45 crc kubenswrapper[4760]: I0121 16:20:45.363421 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5hkb"]
Jan 21 16:20:45 crc kubenswrapper[4760]: I0121 16:20:45.464968 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5hkb" event={"ID":"e0d57ee5-e43e-4edf-bbb1-1429b366bfac","Type":"ContainerStarted","Data":"b8f9cdfee7fb23cfefeaf7c57a1b6d8c73633b7a19bb2c831caf733b4807bd49"}
Jan 21 16:20:46 crc kubenswrapper[4760]: I0121 16:20:46.473910 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5hkb" event={"ID":"e0d57ee5-e43e-4edf-bbb1-1429b366bfac","Type":"ContainerStarted","Data":"0b47221c0f73087b1f08233b1c33a94ef127dcff4bd797f08e1ac0a3f44d1286"}
Jan 21 16:20:46 crc kubenswrapper[4760]: I0121 16:20:46.497955 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5hkb" podStartSLOduration=2.069860341 podStartE2EDuration="2.497928504s" podCreationTimestamp="2026-01-21 16:20:44 +0000 UTC" firstStartedPulling="2026-01-21 16:20:45.363957686 +0000 UTC m=+2016.031727264" lastFinishedPulling="2026-01-21 16:20:45.792025839 +0000 UTC m=+2016.459795427" observedRunningTime="2026-01-21 16:20:46.488851984 +0000 UTC m=+2017.156621562" watchObservedRunningTime="2026-01-21 16:20:46.497928504 +0000 UTC m=+2017.165698082"
Jan 21 16:20:54 crc kubenswrapper[4760]: I0121 16:20:54.540155 4760 generic.go:334] "Generic (PLEG): container finished" podID="e0d57ee5-e43e-4edf-bbb1-1429b366bfac" containerID="0b47221c0f73087b1f08233b1c33a94ef127dcff4bd797f08e1ac0a3f44d1286" exitCode=0
Jan 21 16:20:54 crc kubenswrapper[4760]: I0121 16:20:54.540255 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5hkb" event={"ID":"e0d57ee5-e43e-4edf-bbb1-1429b366bfac","Type":"ContainerDied","Data":"0b47221c0f73087b1f08233b1c33a94ef127dcff4bd797f08e1ac0a3f44d1286"}
Jan 21 16:20:55 crc kubenswrapper[4760]: I0121 16:20:55.975725 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5hkb"
Jan 21 16:20:56 crc kubenswrapper[4760]: I0121 16:20:56.090821 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e0d57ee5-e43e-4edf-bbb1-1429b366bfac-ssh-key-openstack-edpm-ipam\") pod \"e0d57ee5-e43e-4edf-bbb1-1429b366bfac\" (UID: \"e0d57ee5-e43e-4edf-bbb1-1429b366bfac\") "
Jan 21 16:20:56 crc kubenswrapper[4760]: I0121 16:20:56.091739 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0d57ee5-e43e-4edf-bbb1-1429b366bfac-inventory\") pod \"e0d57ee5-e43e-4edf-bbb1-1429b366bfac\" (UID: \"e0d57ee5-e43e-4edf-bbb1-1429b366bfac\") "
Jan 21 16:20:56 crc kubenswrapper[4760]: I0121 16:20:56.092095 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7g5kq\" (UniqueName: \"kubernetes.io/projected/e0d57ee5-e43e-4edf-bbb1-1429b366bfac-kube-api-access-7g5kq\") pod \"e0d57ee5-e43e-4edf-bbb1-1429b366bfac\" (UID: \"e0d57ee5-e43e-4edf-bbb1-1429b366bfac\") "
Jan 21 16:20:56 crc kubenswrapper[4760]: I0121 16:20:56.098520 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0d57ee5-e43e-4edf-bbb1-1429b366bfac-kube-api-access-7g5kq" (OuterVolumeSpecName: "kube-api-access-7g5kq") pod "e0d57ee5-e43e-4edf-bbb1-1429b366bfac" (UID: "e0d57ee5-e43e-4edf-bbb1-1429b366bfac"). InnerVolumeSpecName "kube-api-access-7g5kq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:20:56 crc kubenswrapper[4760]: I0121 16:20:56.119451 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0d57ee5-e43e-4edf-bbb1-1429b366bfac-inventory" (OuterVolumeSpecName: "inventory") pod "e0d57ee5-e43e-4edf-bbb1-1429b366bfac" (UID: "e0d57ee5-e43e-4edf-bbb1-1429b366bfac"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:20:56 crc kubenswrapper[4760]: I0121 16:20:56.123676 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0d57ee5-e43e-4edf-bbb1-1429b366bfac-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e0d57ee5-e43e-4edf-bbb1-1429b366bfac" (UID: "e0d57ee5-e43e-4edf-bbb1-1429b366bfac"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:20:56 crc kubenswrapper[4760]: I0121 16:20:56.196738 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e0d57ee5-e43e-4edf-bbb1-1429b366bfac-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 21 16:20:56 crc kubenswrapper[4760]: I0121 16:20:56.196794 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0d57ee5-e43e-4edf-bbb1-1429b366bfac-inventory\") on node \"crc\" DevicePath \"\""
Jan 21 16:20:56 crc kubenswrapper[4760]: I0121 16:20:56.196849 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7g5kq\" (UniqueName: \"kubernetes.io/projected/e0d57ee5-e43e-4edf-bbb1-1429b366bfac-kube-api-access-7g5kq\") on node \"crc\" DevicePath \"\""
Jan 21 16:20:56 crc kubenswrapper[4760]: I0121 16:20:56.559255 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5hkb" event={"ID":"e0d57ee5-e43e-4edf-bbb1-1429b366bfac","Type":"ContainerDied","Data":"b8f9cdfee7fb23cfefeaf7c57a1b6d8c73633b7a19bb2c831caf733b4807bd49"}
Jan 21 16:20:56 crc kubenswrapper[4760]: I0121 16:20:56.559298 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8f9cdfee7fb23cfefeaf7c57a1b6d8c73633b7a19bb2c831caf733b4807bd49"
Jan 21 16:20:56 crc kubenswrapper[4760]: I0121 16:20:56.559402 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j5hkb"
Jan 21 16:20:56 crc kubenswrapper[4760]: I0121 16:20:56.682805 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r42kq"]
Jan 21 16:20:56 crc kubenswrapper[4760]: E0121 16:20:56.683268 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0d57ee5-e43e-4edf-bbb1-1429b366bfac" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Jan 21 16:20:56 crc kubenswrapper[4760]: I0121 16:20:56.683289 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0d57ee5-e43e-4edf-bbb1-1429b366bfac" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Jan 21 16:20:56 crc kubenswrapper[4760]: I0121 16:20:56.683658 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0d57ee5-e43e-4edf-bbb1-1429b366bfac" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Jan 21 16:20:56 crc kubenswrapper[4760]: I0121 16:20:56.684369 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r42kq"
Jan 21 16:20:56 crc kubenswrapper[4760]: I0121 16:20:56.686580 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 21 16:20:56 crc kubenswrapper[4760]: I0121 16:20:56.686593 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-brqp8"
Jan 21 16:20:56 crc kubenswrapper[4760]: I0121 16:20:56.686959 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 21 16:20:56 crc kubenswrapper[4760]: I0121 16:20:56.688601 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 21 16:20:56 crc kubenswrapper[4760]: I0121 16:20:56.709421 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r42kq"]
Jan 21 16:20:56 crc kubenswrapper[4760]: I0121 16:20:56.805388 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72a45862-35fa-4414-83d0-3e20bf784780-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-r42kq\" (UID: \"72a45862-35fa-4414-83d0-3e20bf784780\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r42kq"
Jan 21 16:20:56 crc kubenswrapper[4760]: I0121 16:20:56.805446 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72a45862-35fa-4414-83d0-3e20bf784780-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-r42kq\" (UID: \"72a45862-35fa-4414-83d0-3e20bf784780\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r42kq"
Jan 21 16:20:56 crc kubenswrapper[4760]: I0121 16:20:56.805523 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hjvm\" (UniqueName: \"kubernetes.io/projected/72a45862-35fa-4414-83d0-3e20bf784780-kube-api-access-4hjvm\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-r42kq\" (UID: \"72a45862-35fa-4414-83d0-3e20bf784780\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r42kq"
Jan 21 16:20:56 crc kubenswrapper[4760]: I0121 16:20:56.907672 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72a45862-35fa-4414-83d0-3e20bf784780-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-r42kq\" (UID: \"72a45862-35fa-4414-83d0-3e20bf784780\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r42kq"
Jan 21 16:20:56 crc kubenswrapper[4760]: I0121 16:20:56.907734 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72a45862-35fa-4414-83d0-3e20bf784780-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-r42kq\" (UID: \"72a45862-35fa-4414-83d0-3e20bf784780\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r42kq"
Jan 21 16:20:56 crc kubenswrapper[4760]: I0121 16:20:56.907840 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hjvm\" (UniqueName: \"kubernetes.io/projected/72a45862-35fa-4414-83d0-3e20bf784780-kube-api-access-4hjvm\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-r42kq\" (UID: \"72a45862-35fa-4414-83d0-3e20bf784780\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r42kq"
Jan 21 16:20:56 crc kubenswrapper[4760]: I0121 16:20:56.912176 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72a45862-35fa-4414-83d0-3e20bf784780-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-r42kq\" (UID: \"72a45862-35fa-4414-83d0-3e20bf784780\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r42kq"
Jan 21 16:20:56 crc kubenswrapper[4760]: I0121 16:20:56.912557 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72a45862-35fa-4414-83d0-3e20bf784780-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-r42kq\" (UID: \"72a45862-35fa-4414-83d0-3e20bf784780\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r42kq"
Jan 21 16:20:56 crc kubenswrapper[4760]: I0121 16:20:56.941681 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hjvm\" (UniqueName: \"kubernetes.io/projected/72a45862-35fa-4414-83d0-3e20bf784780-kube-api-access-4hjvm\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-r42kq\" (UID: \"72a45862-35fa-4414-83d0-3e20bf784780\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r42kq"
Jan 21 16:20:57 crc kubenswrapper[4760]: I0121 16:20:57.003286 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r42kq"
Jan 21 16:20:57 crc kubenswrapper[4760]: I0121 16:20:57.315524 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r42kq"]
Jan 21 16:20:57 crc kubenswrapper[4760]: I0121 16:20:57.568338 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r42kq" event={"ID":"72a45862-35fa-4414-83d0-3e20bf784780","Type":"ContainerStarted","Data":"0acd3711d178beec93f0546f442bee61d744a16d3ac19a1f470dbd13526ef729"}
Jan 21 16:20:58 crc kubenswrapper[4760]: I0121 16:20:58.579189 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r42kq" event={"ID":"72a45862-35fa-4414-83d0-3e20bf784780","Type":"ContainerStarted","Data":"4e3ea4e385e476b9227f6112067fe5d02d19cfdb00255371ecf27edd9c8d2f41"}
Jan 21 16:20:58 crc kubenswrapper[4760]: I0121 16:20:58.603680 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r42kq" podStartSLOduration=2.146685904 podStartE2EDuration="2.603658727s" podCreationTimestamp="2026-01-21 16:20:56 +0000 UTC" firstStartedPulling="2026-01-21 16:20:57.321911159 +0000 UTC m=+2027.989680737" lastFinishedPulling="2026-01-21 16:20:57.778883982 +0000 UTC m=+2028.446653560" observedRunningTime="2026-01-21 16:20:58.600139762 +0000 UTC m=+2029.267909340" watchObservedRunningTime="2026-01-21 16:20:58.603658727 +0000 UTC m=+2029.271428315"
Jan 21 16:21:07 crc kubenswrapper[4760]: I0121 16:21:07.658256 4760 generic.go:334] "Generic (PLEG): container finished" podID="72a45862-35fa-4414-83d0-3e20bf784780" containerID="4e3ea4e385e476b9227f6112067fe5d02d19cfdb00255371ecf27edd9c8d2f41" exitCode=0
Jan 21 16:21:07 crc kubenswrapper[4760]: I0121 16:21:07.658340 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r42kq" event={"ID":"72a45862-35fa-4414-83d0-3e20bf784780","Type":"ContainerDied","Data":"4e3ea4e385e476b9227f6112067fe5d02d19cfdb00255371ecf27edd9c8d2f41"}
Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.115384 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r42kq"
Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.168746 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72a45862-35fa-4414-83d0-3e20bf784780-inventory\") pod \"72a45862-35fa-4414-83d0-3e20bf784780\" (UID: \"72a45862-35fa-4414-83d0-3e20bf784780\") "
Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.168832 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72a45862-35fa-4414-83d0-3e20bf784780-ssh-key-openstack-edpm-ipam\") pod \"72a45862-35fa-4414-83d0-3e20bf784780\" (UID: \"72a45862-35fa-4414-83d0-3e20bf784780\") "
Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.169031 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hjvm\" (UniqueName: \"kubernetes.io/projected/72a45862-35fa-4414-83d0-3e20bf784780-kube-api-access-4hjvm\") pod \"72a45862-35fa-4414-83d0-3e20bf784780\" (UID: \"72a45862-35fa-4414-83d0-3e20bf784780\") "
Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.182130 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72a45862-35fa-4414-83d0-3e20bf784780-kube-api-access-4hjvm" (OuterVolumeSpecName: "kube-api-access-4hjvm") pod "72a45862-35fa-4414-83d0-3e20bf784780" (UID: "72a45862-35fa-4414-83d0-3e20bf784780"). InnerVolumeSpecName "kube-api-access-4hjvm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.200971 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72a45862-35fa-4414-83d0-3e20bf784780-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "72a45862-35fa-4414-83d0-3e20bf784780" (UID: "72a45862-35fa-4414-83d0-3e20bf784780"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.216614 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72a45862-35fa-4414-83d0-3e20bf784780-inventory" (OuterVolumeSpecName: "inventory") pod "72a45862-35fa-4414-83d0-3e20bf784780" (UID: "72a45862-35fa-4414-83d0-3e20bf784780"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.271118 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hjvm\" (UniqueName: \"kubernetes.io/projected/72a45862-35fa-4414-83d0-3e20bf784780-kube-api-access-4hjvm\") on node \"crc\" DevicePath \"\""
Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.271163 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72a45862-35fa-4414-83d0-3e20bf784780-inventory\") on node \"crc\" DevicePath \"\""
Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.271178 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/72a45862-35fa-4414-83d0-3e20bf784780-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.681613 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r42kq" event={"ID":"72a45862-35fa-4414-83d0-3e20bf784780","Type":"ContainerDied","Data":"0acd3711d178beec93f0546f442bee61d744a16d3ac19a1f470dbd13526ef729"}
Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.681656 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0acd3711d178beec93f0546f442bee61d744a16d3ac19a1f470dbd13526ef729"
Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.681717 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r42kq"
Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.759556 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l"]
Jan 21 16:21:09 crc kubenswrapper[4760]: E0121 16:21:09.760904 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72a45862-35fa-4414-83d0-3e20bf784780" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.760924 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="72a45862-35fa-4414-83d0-3e20bf784780" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.761094 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="72a45862-35fa-4414-83d0-3e20bf784780" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.761789 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l"
Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.765047 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.765253 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.767072 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0"
Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.767179 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0"
Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.767430 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0"
Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.769659 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0"
Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.769723 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-brqp8"
Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.769940 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.777535 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l"]
Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.883617 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l"
Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.883684 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l"
Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.883706 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l"
Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.883729 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l"
Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.883766 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l"
Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.883937 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/81b15839-b904-442b-bd7a-f42a043a7be6-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l"
Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.884040 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l"
Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.884157 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l"
Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.884214 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/81b15839-b904-442b-bd7a-f42a043a7be6-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l"
Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.884355 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/81b15839-b904-442b-bd7a-f42a043a7be6-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l"
Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.884525 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l"
Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.884574 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkb57\" (UniqueName: \"kubernetes.io/projected/81b15839-b904-442b-bd7a-f42a043a7be6-kube-api-access-qkb57\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l"
Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.884624 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l"
Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.884712 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/81b15839-b904-442b-bd7a-f42a043a7be6-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l"
Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.987134 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/81b15839-b904-442b-bd7a-f42a043a7be6-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l"
Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.987255 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l"
Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.987378 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-bootstrap-combined-ca-bundle\") 
pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.987426 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/81b15839-b904-442b-bd7a-f42a043a7be6-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.987481 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/81b15839-b904-442b-bd7a-f42a043a7be6-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.987605 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.987645 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkb57\" (UniqueName: \"kubernetes.io/projected/81b15839-b904-442b-bd7a-f42a043a7be6-kube-api-access-qkb57\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.987694 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.987745 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/81b15839-b904-442b-bd7a-f42a043a7be6-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.987832 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.987907 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: 
\"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.987949 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.988008 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.988092 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.992510 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:09 crc 
kubenswrapper[4760]: I0121 16:21:09.993112 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/81b15839-b904-442b-bd7a-f42a043a7be6-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.993281 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.994370 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/81b15839-b904-442b-bd7a-f42a043a7be6-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.995237 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/81b15839-b904-442b-bd7a-f42a043a7be6-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.995539 4760 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.997258 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.997298 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/81b15839-b904-442b-bd7a-f42a043a7be6-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.998539 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:09 crc kubenswrapper[4760]: I0121 16:21:09.999136 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:10 crc kubenswrapper[4760]: I0121 16:21:10.000223 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:10 crc kubenswrapper[4760]: I0121 16:21:10.001137 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:10 crc kubenswrapper[4760]: I0121 16:21:10.001317 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:10 crc kubenswrapper[4760]: I0121 16:21:10.007755 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkb57\" (UniqueName: \"kubernetes.io/projected/81b15839-b904-442b-bd7a-f42a043a7be6-kube-api-access-qkb57\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l\" (UID: 
\"81b15839-b904-442b-bd7a-f42a043a7be6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:10 crc kubenswrapper[4760]: I0121 16:21:10.081811 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:10 crc kubenswrapper[4760]: E0121 16:21:10.328491 4760 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72a45862_35fa_4414_83d0_3e20bf784780.slice\": RecentStats: unable to find data in memory cache]" Jan 21 16:21:10 crc kubenswrapper[4760]: I0121 16:21:10.388232 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l"] Jan 21 16:21:10 crc kubenswrapper[4760]: I0121 16:21:10.690458 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" event={"ID":"81b15839-b904-442b-bd7a-f42a043a7be6","Type":"ContainerStarted","Data":"e5dfcbe686289752957d0df212a5f567ac55248bb7e29ae4df8dc013c500ea4d"} Jan 21 16:21:11 crc kubenswrapper[4760]: I0121 16:21:11.701797 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" event={"ID":"81b15839-b904-442b-bd7a-f42a043a7be6","Type":"ContainerStarted","Data":"a240899e01d40bded79c2811377d17c4ac415853fb0e642a78122569806febac"} Jan 21 16:21:11 crc kubenswrapper[4760]: I0121 16:21:11.727242 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" podStartSLOduration=2.2914362329999998 podStartE2EDuration="2.727220223s" podCreationTimestamp="2026-01-21 16:21:09 +0000 UTC" firstStartedPulling="2026-01-21 16:21:10.396735464 +0000 UTC m=+2041.064505042" lastFinishedPulling="2026-01-21 
16:21:10.832519454 +0000 UTC m=+2041.500289032" observedRunningTime="2026-01-21 16:21:11.71967378 +0000 UTC m=+2042.387443358" watchObservedRunningTime="2026-01-21 16:21:11.727220223 +0000 UTC m=+2042.394989801" Jan 21 16:21:20 crc kubenswrapper[4760]: E0121 16:21:20.536436 4760 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72a45862_35fa_4414_83d0_3e20bf784780.slice\": RecentStats: unable to find data in memory cache]" Jan 21 16:21:30 crc kubenswrapper[4760]: E0121 16:21:30.748726 4760 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72a45862_35fa_4414_83d0_3e20bf784780.slice\": RecentStats: unable to find data in memory cache]" Jan 21 16:21:40 crc kubenswrapper[4760]: E0121 16:21:40.996404 4760 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72a45862_35fa_4414_83d0_3e20bf784780.slice\": RecentStats: unable to find data in memory cache]" Jan 21 16:21:49 crc kubenswrapper[4760]: I0121 16:21:49.052458 4760 generic.go:334] "Generic (PLEG): container finished" podID="81b15839-b904-442b-bd7a-f42a043a7be6" containerID="a240899e01d40bded79c2811377d17c4ac415853fb0e642a78122569806febac" exitCode=0 Jan 21 16:21:49 crc kubenswrapper[4760]: I0121 16:21:49.052575 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" event={"ID":"81b15839-b904-442b-bd7a-f42a043a7be6","Type":"ContainerDied","Data":"a240899e01d40bded79c2811377d17c4ac415853fb0e642a78122569806febac"} Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.517528 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.546694 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-bootstrap-combined-ca-bundle\") pod \"81b15839-b904-442b-bd7a-f42a043a7be6\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.546748 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-telemetry-combined-ca-bundle\") pod \"81b15839-b904-442b-bd7a-f42a043a7be6\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.546823 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-ssh-key-openstack-edpm-ipam\") pod \"81b15839-b904-442b-bd7a-f42a043a7be6\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.546856 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/81b15839-b904-442b-bd7a-f42a043a7be6-openstack-edpm-ipam-ovn-default-certs-0\") pod \"81b15839-b904-442b-bd7a-f42a043a7be6\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.546953 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/81b15839-b904-442b-bd7a-f42a043a7be6-openstack-edpm-ipam-telemetry-default-certs-0\") pod 
\"81b15839-b904-442b-bd7a-f42a043a7be6\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.547007 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-nova-combined-ca-bundle\") pod \"81b15839-b904-442b-bd7a-f42a043a7be6\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.547032 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-libvirt-combined-ca-bundle\") pod \"81b15839-b904-442b-bd7a-f42a043a7be6\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.547059 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkb57\" (UniqueName: \"kubernetes.io/projected/81b15839-b904-442b-bd7a-f42a043a7be6-kube-api-access-qkb57\") pod \"81b15839-b904-442b-bd7a-f42a043a7be6\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.547121 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-neutron-metadata-combined-ca-bundle\") pod \"81b15839-b904-442b-bd7a-f42a043a7be6\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.547161 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/81b15839-b904-442b-bd7a-f42a043a7be6-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"81b15839-b904-442b-bd7a-f42a043a7be6\" (UID: 
\"81b15839-b904-442b-bd7a-f42a043a7be6\") " Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.547254 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-inventory\") pod \"81b15839-b904-442b-bd7a-f42a043a7be6\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.547289 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-ovn-combined-ca-bundle\") pod \"81b15839-b904-442b-bd7a-f42a043a7be6\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.547461 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-repo-setup-combined-ca-bundle\") pod \"81b15839-b904-442b-bd7a-f42a043a7be6\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.547490 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/81b15839-b904-442b-bd7a-f42a043a7be6-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"81b15839-b904-442b-bd7a-f42a043a7be6\" (UID: \"81b15839-b904-442b-bd7a-f42a043a7be6\") " Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.555207 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "81b15839-b904-442b-bd7a-f42a043a7be6" (UID: "81b15839-b904-442b-bd7a-f42a043a7be6"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.557379 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "81b15839-b904-442b-bd7a-f42a043a7be6" (UID: "81b15839-b904-442b-bd7a-f42a043a7be6"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.558435 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "81b15839-b904-442b-bd7a-f42a043a7be6" (UID: "81b15839-b904-442b-bd7a-f42a043a7be6"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.559227 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81b15839-b904-442b-bd7a-f42a043a7be6-kube-api-access-qkb57" (OuterVolumeSpecName: "kube-api-access-qkb57") pod "81b15839-b904-442b-bd7a-f42a043a7be6" (UID: "81b15839-b904-442b-bd7a-f42a043a7be6"). InnerVolumeSpecName "kube-api-access-qkb57". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.559849 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81b15839-b904-442b-bd7a-f42a043a7be6-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "81b15839-b904-442b-bd7a-f42a043a7be6" (UID: "81b15839-b904-442b-bd7a-f42a043a7be6"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.560164 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81b15839-b904-442b-bd7a-f42a043a7be6-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "81b15839-b904-442b-bd7a-f42a043a7be6" (UID: "81b15839-b904-442b-bd7a-f42a043a7be6"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.560566 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81b15839-b904-442b-bd7a-f42a043a7be6-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "81b15839-b904-442b-bd7a-f42a043a7be6" (UID: "81b15839-b904-442b-bd7a-f42a043a7be6"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.562203 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "81b15839-b904-442b-bd7a-f42a043a7be6" (UID: "81b15839-b904-442b-bd7a-f42a043a7be6"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.562187 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "81b15839-b904-442b-bd7a-f42a043a7be6" (UID: "81b15839-b904-442b-bd7a-f42a043a7be6"). 
InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.564165 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "81b15839-b904-442b-bd7a-f42a043a7be6" (UID: "81b15839-b904-442b-bd7a-f42a043a7be6"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.568084 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81b15839-b904-442b-bd7a-f42a043a7be6-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "81b15839-b904-442b-bd7a-f42a043a7be6" (UID: "81b15839-b904-442b-bd7a-f42a043a7be6"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.569499 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "81b15839-b904-442b-bd7a-f42a043a7be6" (UID: "81b15839-b904-442b-bd7a-f42a043a7be6"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.592481 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-inventory" (OuterVolumeSpecName: "inventory") pod "81b15839-b904-442b-bd7a-f42a043a7be6" (UID: "81b15839-b904-442b-bd7a-f42a043a7be6"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.593890 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "81b15839-b904-442b-bd7a-f42a043a7be6" (UID: "81b15839-b904-442b-bd7a-f42a043a7be6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.649416 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-inventory\") on node \"crc\" DevicePath \"\""
Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.649449 4760 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.649460 4760 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.649470 4760 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/81b15839-b904-442b-bd7a-f42a043a7be6-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\""
Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.649480 4760 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.649490 4760 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.649501 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.649511 4760 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/81b15839-b904-442b-bd7a-f42a043a7be6-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\""
Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.649525 4760 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/81b15839-b904-442b-bd7a-f42a043a7be6-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\""
Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.649536 4760 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.649545 4760 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.649554 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkb57\" (UniqueName: \"kubernetes.io/projected/81b15839-b904-442b-bd7a-f42a043a7be6-kube-api-access-qkb57\") on node \"crc\" DevicePath \"\""
Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.649565 4760 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81b15839-b904-442b-bd7a-f42a043a7be6-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 16:21:50 crc kubenswrapper[4760]: I0121 16:21:50.649576 4760 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/81b15839-b904-442b-bd7a-f42a043a7be6-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\""
Jan 21 16:21:51 crc kubenswrapper[4760]: I0121 16:21:51.081627 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l" event={"ID":"81b15839-b904-442b-bd7a-f42a043a7be6","Type":"ContainerDied","Data":"e5dfcbe686289752957d0df212a5f567ac55248bb7e29ae4df8dc013c500ea4d"}
Jan 21 16:21:51 crc kubenswrapper[4760]: I0121 16:21:51.081695 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5dfcbe686289752957d0df212a5f567ac55248bb7e29ae4df8dc013c500ea4d"
Jan 21 16:21:51 crc kubenswrapper[4760]: I0121 16:21:51.081803 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l"
Jan 21 16:21:51 crc kubenswrapper[4760]: I0121 16:21:51.239857 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-pv9jf"]
Jan 21 16:21:51 crc kubenswrapper[4760]: E0121 16:21:51.241045 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81b15839-b904-442b-bd7a-f42a043a7be6" containerName="install-certs-edpm-deployment-openstack-edpm-ipam"
Jan 21 16:21:51 crc kubenswrapper[4760]: I0121 16:21:51.241076 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="81b15839-b904-442b-bd7a-f42a043a7be6" containerName="install-certs-edpm-deployment-openstack-edpm-ipam"
Jan 21 16:21:51 crc kubenswrapper[4760]: I0121 16:21:51.241568 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="81b15839-b904-442b-bd7a-f42a043a7be6" containerName="install-certs-edpm-deployment-openstack-edpm-ipam"
Jan 21 16:21:51 crc kubenswrapper[4760]: I0121 16:21:51.242828 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pv9jf"
Jan 21 16:21:51 crc kubenswrapper[4760]: I0121 16:21:51.246874 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 21 16:21:51 crc kubenswrapper[4760]: I0121 16:21:51.247285 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config"
Jan 21 16:21:51 crc kubenswrapper[4760]: I0121 16:21:51.247335 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 21 16:21:51 crc kubenswrapper[4760]: I0121 16:21:51.247414 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-brqp8"
Jan 21 16:21:51 crc kubenswrapper[4760]: I0121 16:21:51.248263 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 21 16:21:51 crc kubenswrapper[4760]: I0121 16:21:51.269040 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-pv9jf"]
Jan 21 16:21:51 crc kubenswrapper[4760]: I0121 16:21:51.273887 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fee344d1-5ba0-4b85-85bf-8133d451624e-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-pv9jf\" (UID: \"fee344d1-5ba0-4b85-85bf-8133d451624e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pv9jf"
Jan 21 16:21:51 crc kubenswrapper[4760]: I0121 16:21:51.274018 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x2nl\" (UniqueName: \"kubernetes.io/projected/fee344d1-5ba0-4b85-85bf-8133d451624e-kube-api-access-9x2nl\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-pv9jf\" (UID: \"fee344d1-5ba0-4b85-85bf-8133d451624e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pv9jf"
Jan 21 16:21:51 crc kubenswrapper[4760]: I0121 16:21:51.274167 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fee344d1-5ba0-4b85-85bf-8133d451624e-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-pv9jf\" (UID: \"fee344d1-5ba0-4b85-85bf-8133d451624e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pv9jf"
Jan 21 16:21:51 crc kubenswrapper[4760]: I0121 16:21:51.274278 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/fee344d1-5ba0-4b85-85bf-8133d451624e-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-pv9jf\" (UID: \"fee344d1-5ba0-4b85-85bf-8133d451624e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pv9jf"
Jan 21 16:21:51 crc kubenswrapper[4760]: I0121 16:21:51.274399 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fee344d1-5ba0-4b85-85bf-8133d451624e-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-pv9jf\" (UID: \"fee344d1-5ba0-4b85-85bf-8133d451624e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pv9jf"
Jan 21 16:21:51 crc kubenswrapper[4760]: E0121 16:21:51.314192 4760 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81b15839_b904_442b_bd7a_f42a043a7be6.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72a45862_35fa_4414_83d0_3e20bf784780.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81b15839_b904_442b_bd7a_f42a043a7be6.slice/crio-e5dfcbe686289752957d0df212a5f567ac55248bb7e29ae4df8dc013c500ea4d\": RecentStats: unable to find data in memory cache]"
Jan 21 16:21:51 crc kubenswrapper[4760]: I0121 16:21:51.377261 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fee344d1-5ba0-4b85-85bf-8133d451624e-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-pv9jf\" (UID: \"fee344d1-5ba0-4b85-85bf-8133d451624e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pv9jf"
Jan 21 16:21:51 crc kubenswrapper[4760]: I0121 16:21:51.377995 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fee344d1-5ba0-4b85-85bf-8133d451624e-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-pv9jf\" (UID: \"fee344d1-5ba0-4b85-85bf-8133d451624e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pv9jf"
Jan 21 16:21:51 crc kubenswrapper[4760]: I0121 16:21:51.378219 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9x2nl\" (UniqueName: \"kubernetes.io/projected/fee344d1-5ba0-4b85-85bf-8133d451624e-kube-api-access-9x2nl\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-pv9jf\" (UID: \"fee344d1-5ba0-4b85-85bf-8133d451624e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pv9jf"
Jan 21 16:21:51 crc kubenswrapper[4760]: I0121 16:21:51.378470 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fee344d1-5ba0-4b85-85bf-8133d451624e-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-pv9jf\" (UID: \"fee344d1-5ba0-4b85-85bf-8133d451624e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pv9jf"
Jan 21 16:21:51 crc kubenswrapper[4760]: I0121 16:21:51.378700 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/fee344d1-5ba0-4b85-85bf-8133d451624e-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-pv9jf\" (UID: \"fee344d1-5ba0-4b85-85bf-8133d451624e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pv9jf"
Jan 21 16:21:51 crc kubenswrapper[4760]: I0121 16:21:51.380426 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/fee344d1-5ba0-4b85-85bf-8133d451624e-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-pv9jf\" (UID: \"fee344d1-5ba0-4b85-85bf-8133d451624e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pv9jf"
Jan 21 16:21:51 crc kubenswrapper[4760]: I0121 16:21:51.386426 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fee344d1-5ba0-4b85-85bf-8133d451624e-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-pv9jf\" (UID: \"fee344d1-5ba0-4b85-85bf-8133d451624e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pv9jf"
Jan 21 16:21:51 crc kubenswrapper[4760]: I0121 16:21:51.386426 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fee344d1-5ba0-4b85-85bf-8133d451624e-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-pv9jf\" (UID: \"fee344d1-5ba0-4b85-85bf-8133d451624e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pv9jf"
Jan 21 16:21:51 crc kubenswrapper[4760]: I0121 16:21:51.387145 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fee344d1-5ba0-4b85-85bf-8133d451624e-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-pv9jf\" (UID: \"fee344d1-5ba0-4b85-85bf-8133d451624e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pv9jf"
Jan 21 16:21:51 crc kubenswrapper[4760]: I0121 16:21:51.400889 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x2nl\" (UniqueName: \"kubernetes.io/projected/fee344d1-5ba0-4b85-85bf-8133d451624e-kube-api-access-9x2nl\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-pv9jf\" (UID: \"fee344d1-5ba0-4b85-85bf-8133d451624e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pv9jf"
Jan 21 16:21:51 crc kubenswrapper[4760]: I0121 16:21:51.574185 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pv9jf"
Jan 21 16:21:52 crc kubenswrapper[4760]: I0121 16:21:52.216188 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-pv9jf"]
Jan 21 16:21:53 crc kubenswrapper[4760]: I0121 16:21:53.108273 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pv9jf" event={"ID":"fee344d1-5ba0-4b85-85bf-8133d451624e","Type":"ContainerStarted","Data":"38d34cfceb4f7575f8c0d020153da49d5ea238e2ee2a703dc072f1d628ed4a69"}
Jan 21 16:21:57 crc kubenswrapper[4760]: I0121 16:21:57.147251 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pv9jf" event={"ID":"fee344d1-5ba0-4b85-85bf-8133d451624e","Type":"ContainerStarted","Data":"3941881cb2b8f10ca6f8929a28dbb868b1e7fbd813e2dad26d8602b006eb10d2"}
Jan 21 16:21:57 crc kubenswrapper[4760]: I0121 16:21:57.169193 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pv9jf" podStartSLOduration=1.731915153 podStartE2EDuration="6.169173317s" podCreationTimestamp="2026-01-21 16:21:51 +0000 UTC" firstStartedPulling="2026-01-21 16:21:52.231141312 +0000 UTC m=+2082.898910900" lastFinishedPulling="2026-01-21 16:21:56.668399486 +0000 UTC m=+2087.336169064" observedRunningTime="2026-01-21 16:21:57.162539146 +0000 UTC m=+2087.830308724" watchObservedRunningTime="2026-01-21 16:21:57.169173317 +0000 UTC m=+2087.836942895"
Jan 21 16:22:01 crc kubenswrapper[4760]: E0121 16:22:01.591000 4760 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72a45862_35fa_4414_83d0_3e20bf784780.slice\": RecentStats: unable to find data in memory cache]"
Jan 21 16:22:50 crc kubenswrapper[4760]: I0121 16:22:50.946501 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 16:22:50 crc kubenswrapper[4760]: I0121 16:22:50.947176 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 16:22:59 crc kubenswrapper[4760]: I0121 16:22:59.785635 4760 generic.go:334] "Generic (PLEG): container finished" podID="fee344d1-5ba0-4b85-85bf-8133d451624e" containerID="3941881cb2b8f10ca6f8929a28dbb868b1e7fbd813e2dad26d8602b006eb10d2" exitCode=0
Jan 21 16:22:59 crc kubenswrapper[4760]: I0121 16:22:59.786249 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pv9jf" event={"ID":"fee344d1-5ba0-4b85-85bf-8133d451624e","Type":"ContainerDied","Data":"3941881cb2b8f10ca6f8929a28dbb868b1e7fbd813e2dad26d8602b006eb10d2"}
Jan 21 16:23:01 crc kubenswrapper[4760]: I0121 16:23:01.254467 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pv9jf"
Jan 21 16:23:01 crc kubenswrapper[4760]: I0121 16:23:01.400856 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9x2nl\" (UniqueName: \"kubernetes.io/projected/fee344d1-5ba0-4b85-85bf-8133d451624e-kube-api-access-9x2nl\") pod \"fee344d1-5ba0-4b85-85bf-8133d451624e\" (UID: \"fee344d1-5ba0-4b85-85bf-8133d451624e\") "
Jan 21 16:23:01 crc kubenswrapper[4760]: I0121 16:23:01.400967 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fee344d1-5ba0-4b85-85bf-8133d451624e-inventory\") pod \"fee344d1-5ba0-4b85-85bf-8133d451624e\" (UID: \"fee344d1-5ba0-4b85-85bf-8133d451624e\") "
Jan 21 16:23:01 crc kubenswrapper[4760]: I0121 16:23:01.401019 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/fee344d1-5ba0-4b85-85bf-8133d451624e-ovncontroller-config-0\") pod \"fee344d1-5ba0-4b85-85bf-8133d451624e\" (UID: \"fee344d1-5ba0-4b85-85bf-8133d451624e\") "
Jan 21 16:23:01 crc kubenswrapper[4760]: I0121 16:23:01.401155 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fee344d1-5ba0-4b85-85bf-8133d451624e-ovn-combined-ca-bundle\") pod \"fee344d1-5ba0-4b85-85bf-8133d451624e\" (UID: \"fee344d1-5ba0-4b85-85bf-8133d451624e\") "
Jan 21 16:23:01 crc kubenswrapper[4760]: I0121 16:23:01.401203 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fee344d1-5ba0-4b85-85bf-8133d451624e-ssh-key-openstack-edpm-ipam\") pod \"fee344d1-5ba0-4b85-85bf-8133d451624e\" (UID: \"fee344d1-5ba0-4b85-85bf-8133d451624e\") "
Jan 21 16:23:01 crc kubenswrapper[4760]: I0121 16:23:01.407940 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fee344d1-5ba0-4b85-85bf-8133d451624e-kube-api-access-9x2nl" (OuterVolumeSpecName: "kube-api-access-9x2nl") pod "fee344d1-5ba0-4b85-85bf-8133d451624e" (UID: "fee344d1-5ba0-4b85-85bf-8133d451624e"). InnerVolumeSpecName "kube-api-access-9x2nl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:23:01 crc kubenswrapper[4760]: I0121 16:23:01.408606 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fee344d1-5ba0-4b85-85bf-8133d451624e-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "fee344d1-5ba0-4b85-85bf-8133d451624e" (UID: "fee344d1-5ba0-4b85-85bf-8133d451624e"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:23:01 crc kubenswrapper[4760]: I0121 16:23:01.430304 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fee344d1-5ba0-4b85-85bf-8133d451624e-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "fee344d1-5ba0-4b85-85bf-8133d451624e" (UID: "fee344d1-5ba0-4b85-85bf-8133d451624e"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 16:23:01 crc kubenswrapper[4760]: E0121 16:23:01.449343 4760 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fee344d1-5ba0-4b85-85bf-8133d451624e-inventory podName:fee344d1-5ba0-4b85-85bf-8133d451624e nodeName:}" failed. No retries permitted until 2026-01-21 16:23:01.949295178 +0000 UTC m=+2152.617064756 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "inventory" (UniqueName: "kubernetes.io/secret/fee344d1-5ba0-4b85-85bf-8133d451624e-inventory") pod "fee344d1-5ba0-4b85-85bf-8133d451624e" (UID: "fee344d1-5ba0-4b85-85bf-8133d451624e") : error deleting /var/lib/kubelet/pods/fee344d1-5ba0-4b85-85bf-8133d451624e/volume-subpaths: remove /var/lib/kubelet/pods/fee344d1-5ba0-4b85-85bf-8133d451624e/volume-subpaths: no such file or directory
Jan 21 16:23:01 crc kubenswrapper[4760]: I0121 16:23:01.454194 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fee344d1-5ba0-4b85-85bf-8133d451624e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fee344d1-5ba0-4b85-85bf-8133d451624e" (UID: "fee344d1-5ba0-4b85-85bf-8133d451624e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:23:01 crc kubenswrapper[4760]: I0121 16:23:01.504499 4760 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/fee344d1-5ba0-4b85-85bf-8133d451624e-ovncontroller-config-0\") on node \"crc\" DevicePath \"\""
Jan 21 16:23:01 crc kubenswrapper[4760]: I0121 16:23:01.504540 4760 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fee344d1-5ba0-4b85-85bf-8133d451624e-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 16:23:01 crc kubenswrapper[4760]: I0121 16:23:01.504559 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fee344d1-5ba0-4b85-85bf-8133d451624e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 21 16:23:01 crc kubenswrapper[4760]: I0121 16:23:01.504604 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9x2nl\" (UniqueName: \"kubernetes.io/projected/fee344d1-5ba0-4b85-85bf-8133d451624e-kube-api-access-9x2nl\") on node \"crc\" DevicePath \"\""
Jan 21 16:23:01 crc kubenswrapper[4760]: I0121 16:23:01.808954 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pv9jf" event={"ID":"fee344d1-5ba0-4b85-85bf-8133d451624e","Type":"ContainerDied","Data":"38d34cfceb4f7575f8c0d020153da49d5ea238e2ee2a703dc072f1d628ed4a69"}
Jan 21 16:23:01 crc kubenswrapper[4760]: I0121 16:23:01.809015 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38d34cfceb4f7575f8c0d020153da49d5ea238e2ee2a703dc072f1d628ed4a69"
Jan 21 16:23:01 crc kubenswrapper[4760]: I0121 16:23:01.809180 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-pv9jf"
Jan 21 16:23:01 crc kubenswrapper[4760]: I0121 16:23:01.898121 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2"]
Jan 21 16:23:01 crc kubenswrapper[4760]: E0121 16:23:01.898693 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fee344d1-5ba0-4b85-85bf-8133d451624e" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Jan 21 16:23:01 crc kubenswrapper[4760]: I0121 16:23:01.898712 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="fee344d1-5ba0-4b85-85bf-8133d451624e" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Jan 21 16:23:01 crc kubenswrapper[4760]: I0121 16:23:01.899087 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="fee344d1-5ba0-4b85-85bf-8133d451624e" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Jan 21 16:23:01 crc kubenswrapper[4760]: I0121 16:23:01.899978 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2"
Jan 21 16:23:01 crc kubenswrapper[4760]: I0121 16:23:01.901913 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config"
Jan 21 16:23:01 crc kubenswrapper[4760]: I0121 16:23:01.903417 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config"
Jan 21 16:23:01 crc kubenswrapper[4760]: I0121 16:23:01.912405 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2"]
Jan 21 16:23:01 crc kubenswrapper[4760]: I0121 16:23:01.914574 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/93a8f498-bf0c-43f6-aad8-e26843ca3295-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2\" (UID: \"93a8f498-bf0c-43f6-aad8-e26843ca3295\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2"
Jan 21 16:23:01 crc kubenswrapper[4760]: I0121 16:23:01.914720 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93a8f498-bf0c-43f6-aad8-e26843ca3295-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2\" (UID: \"93a8f498-bf0c-43f6-aad8-e26843ca3295\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2"
Jan 21 16:23:01 crc kubenswrapper[4760]: I0121 16:23:01.914783 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/93a8f498-bf0c-43f6-aad8-e26843ca3295-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2\" (UID: \"93a8f498-bf0c-43f6-aad8-e26843ca3295\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2"
Jan 21 16:23:01 crc kubenswrapper[4760]: I0121 16:23:01.914852 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/93a8f498-bf0c-43f6-aad8-e26843ca3295-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2\" (UID: \"93a8f498-bf0c-43f6-aad8-e26843ca3295\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2"
Jan 21 16:23:01 crc kubenswrapper[4760]: I0121 16:23:01.914946 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/93a8f498-bf0c-43f6-aad8-e26843ca3295-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2\" (UID: \"93a8f498-bf0c-43f6-aad8-e26843ca3295\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2"
Jan 21 16:23:01 crc kubenswrapper[4760]: I0121 16:23:01.915029 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r2g8\" (UniqueName: \"kubernetes.io/projected/93a8f498-bf0c-43f6-aad8-e26843ca3295-kube-api-access-5r2g8\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2\" (UID: \"93a8f498-bf0c-43f6-aad8-e26843ca3295\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2"
Jan 21 16:23:02 crc kubenswrapper[4760]: I0121 16:23:02.016208 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fee344d1-5ba0-4b85-85bf-8133d451624e-inventory\") pod \"fee344d1-5ba0-4b85-85bf-8133d451624e\" (UID: \"fee344d1-5ba0-4b85-85bf-8133d451624e\") "
Jan 21 16:23:02 crc kubenswrapper[4760]: I0121 16:23:02.016383 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/93a8f498-bf0c-43f6-aad8-e26843ca3295-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2\" (UID: \"93a8f498-bf0c-43f6-aad8-e26843ca3295\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2"
Jan 21 16:23:02 crc kubenswrapper[4760]: I0121 16:23:02.016422 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r2g8\" (UniqueName: \"kubernetes.io/projected/93a8f498-bf0c-43f6-aad8-e26843ca3295-kube-api-access-5r2g8\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2\" (UID: \"93a8f498-bf0c-43f6-aad8-e26843ca3295\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2"
Jan 21 16:23:02 crc kubenswrapper[4760]: I0121 16:23:02.016467 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/93a8f498-bf0c-43f6-aad8-e26843ca3295-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2\" (UID: \"93a8f498-bf0c-43f6-aad8-e26843ca3295\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2"
Jan 21 16:23:02 crc kubenswrapper[4760]: I0121 16:23:02.016527 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93a8f498-bf0c-43f6-aad8-e26843ca3295-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2\" (UID: \"93a8f498-bf0c-43f6-aad8-e26843ca3295\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2"
Jan 21 16:23:02 crc kubenswrapper[4760]: I0121 16:23:02.016554 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/93a8f498-bf0c-43f6-aad8-e26843ca3295-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2\" (UID: \"93a8f498-bf0c-43f6-aad8-e26843ca3295\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2"
Jan 21 16:23:02 crc kubenswrapper[4760]: I0121 16:23:02.016587 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/93a8f498-bf0c-43f6-aad8-e26843ca3295-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2\" (UID: \"93a8f498-bf0c-43f6-aad8-e26843ca3295\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2"
Jan 21 16:23:02 crc kubenswrapper[4760]: I0121 16:23:02.022212 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93a8f498-bf0c-43f6-aad8-e26843ca3295-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2\" (UID: \"93a8f498-bf0c-43f6-aad8-e26843ca3295\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2"
Jan 21 16:23:02 crc kubenswrapper[4760]: I0121 16:23:02.022461 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/93a8f498-bf0c-43f6-aad8-e26843ca3295-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2\" (UID: \"93a8f498-bf0c-43f6-aad8-e26843ca3295\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2"
Jan 21 16:23:02 crc kubenswrapper[4760]: I0121 16:23:02.022644 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/93a8f498-bf0c-43f6-aad8-e26843ca3295-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2\" (UID: \"93a8f498-bf0c-43f6-aad8-e26843ca3295\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2"
Jan 21 16:23:02 crc kubenswrapper[4760]: I0121 16:23:02.022721 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fee344d1-5ba0-4b85-85bf-8133d451624e-inventory" (OuterVolumeSpecName: "inventory") pod "fee344d1-5ba0-4b85-85bf-8133d451624e" (UID: "fee344d1-5ba0-4b85-85bf-8133d451624e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:23:02 crc kubenswrapper[4760]: I0121 16:23:02.023008 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/93a8f498-bf0c-43f6-aad8-e26843ca3295-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2\" (UID: \"93a8f498-bf0c-43f6-aad8-e26843ca3295\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2"
Jan 21 16:23:02 crc kubenswrapper[4760]: I0121 16:23:02.023192 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/93a8f498-bf0c-43f6-aad8-e26843ca3295-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2\" (UID: \"93a8f498-bf0c-43f6-aad8-e26843ca3295\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2"
Jan 21 16:23:02 crc kubenswrapper[4760]: I0121 16:23:02.034496 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r2g8\" (UniqueName: \"kubernetes.io/projected/93a8f498-bf0c-43f6-aad8-e26843ca3295-kube-api-access-5r2g8\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2\" (UID: \"93a8f498-bf0c-43f6-aad8-e26843ca3295\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2"
Jan 21 16:23:02 crc kubenswrapper[4760]: I0121 16:23:02.117859 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fee344d1-5ba0-4b85-85bf-8133d451624e-inventory\") on node \"crc\" DevicePath \"\""
Jan 21 16:23:02 crc kubenswrapper[4760]: I0121 16:23:02.221900 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2"
Jan 21 16:23:02 crc kubenswrapper[4760]: I0121 16:23:02.757392 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2"]
Jan 21 16:23:02 crc kubenswrapper[4760]: I0121 16:23:02.818205 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2" event={"ID":"93a8f498-bf0c-43f6-aad8-e26843ca3295","Type":"ContainerStarted","Data":"5250118359e1dde01ba84f1d9d0ff15035dbaf36ff2ab3ffab4d63dd18e9fb28"}
Jan 21 16:23:03 crc kubenswrapper[4760]: I0121 16:23:03.828806 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2" event={"ID":"93a8f498-bf0c-43f6-aad8-e26843ca3295","Type":"ContainerStarted","Data":"b9a400a79fbfddca4bb0e9307b7dad52157ebeb6a305e024767e7e191077289e"}
Jan 21 16:23:04 crc kubenswrapper[4760]: I0121 16:23:04.854389 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2" podStartSLOduration=3.2493747060000002 podStartE2EDuration="3.854365367s" podCreationTimestamp="2026-01-21 16:23:01 +0000 UTC" firstStartedPulling="2026-01-21 16:23:02.76765926 +0000 UTC m=+2153.435428838" lastFinishedPulling="2026-01-21 16:23:03.372649921 +0000 UTC m=+2154.040419499" observedRunningTime="2026-01-21 16:23:04.849716363 +0000 UTC m=+2155.517485941" watchObservedRunningTime="2026-01-21 16:23:04.854365367 +0000 UTC m=+2155.522134965"
Jan 21 16:23:20 crc kubenswrapper[4760]: I0121 16:23:20.945726 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 16:23:20 crc kubenswrapper[4760]: I0121 16:23:20.947452 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 16:23:50 crc kubenswrapper[4760]: I0121 16:23:50.946164 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 16:23:50 crc kubenswrapper[4760]: I0121 16:23:50.946838 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 16:23:50 crc kubenswrapper[4760]: I0121 16:23:50.946888 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r"
Jan 21 16:23:50 crc kubenswrapper[4760]: I0121 16:23:50.947635 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon"
containerStatusID={"Type":"cri-o","ID":"c2840f7fa52c7be05357223bdfaf116cc5b43319886b3e09ab5d04195688d268"} pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 16:23:50 crc kubenswrapper[4760]: I0121 16:23:50.947688 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" containerID="cri-o://c2840f7fa52c7be05357223bdfaf116cc5b43319886b3e09ab5d04195688d268" gracePeriod=600 Jan 21 16:23:51 crc kubenswrapper[4760]: I0121 16:23:51.237819 4760 generic.go:334] "Generic (PLEG): container finished" podID="93a8f498-bf0c-43f6-aad8-e26843ca3295" containerID="b9a400a79fbfddca4bb0e9307b7dad52157ebeb6a305e024767e7e191077289e" exitCode=0 Jan 21 16:23:51 crc kubenswrapper[4760]: I0121 16:23:51.238217 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2" event={"ID":"93a8f498-bf0c-43f6-aad8-e26843ca3295","Type":"ContainerDied","Data":"b9a400a79fbfddca4bb0e9307b7dad52157ebeb6a305e024767e7e191077289e"} Jan 21 16:23:51 crc kubenswrapper[4760]: I0121 16:23:51.242440 4760 generic.go:334] "Generic (PLEG): container finished" podID="5dd365e7-570c-4130-a299-30e376624ce2" containerID="c2840f7fa52c7be05357223bdfaf116cc5b43319886b3e09ab5d04195688d268" exitCode=0 Jan 21 16:23:51 crc kubenswrapper[4760]: I0121 16:23:51.242572 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerDied","Data":"c2840f7fa52c7be05357223bdfaf116cc5b43319886b3e09ab5d04195688d268"} Jan 21 16:23:51 crc kubenswrapper[4760]: I0121 16:23:51.242825 4760 scope.go:117] "RemoveContainer" 
containerID="3397077c04f1562ba759efe22bb62c3f46a49e2e5a066d18955c7b87afdb4965" Jan 21 16:23:52 crc kubenswrapper[4760]: I0121 16:23:52.254914 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerStarted","Data":"ff059cd7ee64fc1bb7edf6f6979bcf4dea70be92be03f71b7ac9ec4f0c0cddc3"} Jan 21 16:23:52 crc kubenswrapper[4760]: I0121 16:23:52.694943 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2" Jan 21 16:23:52 crc kubenswrapper[4760]: I0121 16:23:52.838698 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/93a8f498-bf0c-43f6-aad8-e26843ca3295-ssh-key-openstack-edpm-ipam\") pod \"93a8f498-bf0c-43f6-aad8-e26843ca3295\" (UID: \"93a8f498-bf0c-43f6-aad8-e26843ca3295\") " Jan 21 16:23:52 crc kubenswrapper[4760]: I0121 16:23:52.838820 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/93a8f498-bf0c-43f6-aad8-e26843ca3295-neutron-ovn-metadata-agent-neutron-config-0\") pod \"93a8f498-bf0c-43f6-aad8-e26843ca3295\" (UID: \"93a8f498-bf0c-43f6-aad8-e26843ca3295\") " Jan 21 16:23:52 crc kubenswrapper[4760]: I0121 16:23:52.838903 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5r2g8\" (UniqueName: \"kubernetes.io/projected/93a8f498-bf0c-43f6-aad8-e26843ca3295-kube-api-access-5r2g8\") pod \"93a8f498-bf0c-43f6-aad8-e26843ca3295\" (UID: \"93a8f498-bf0c-43f6-aad8-e26843ca3295\") " Jan 21 16:23:52 crc kubenswrapper[4760]: I0121 16:23:52.838988 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/93a8f498-bf0c-43f6-aad8-e26843ca3295-neutron-metadata-combined-ca-bundle\") pod \"93a8f498-bf0c-43f6-aad8-e26843ca3295\" (UID: \"93a8f498-bf0c-43f6-aad8-e26843ca3295\") " Jan 21 16:23:52 crc kubenswrapper[4760]: I0121 16:23:52.839015 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/93a8f498-bf0c-43f6-aad8-e26843ca3295-inventory\") pod \"93a8f498-bf0c-43f6-aad8-e26843ca3295\" (UID: \"93a8f498-bf0c-43f6-aad8-e26843ca3295\") " Jan 21 16:23:52 crc kubenswrapper[4760]: I0121 16:23:52.839044 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/93a8f498-bf0c-43f6-aad8-e26843ca3295-nova-metadata-neutron-config-0\") pod \"93a8f498-bf0c-43f6-aad8-e26843ca3295\" (UID: \"93a8f498-bf0c-43f6-aad8-e26843ca3295\") " Jan 21 16:23:52 crc kubenswrapper[4760]: I0121 16:23:52.845419 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93a8f498-bf0c-43f6-aad8-e26843ca3295-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "93a8f498-bf0c-43f6-aad8-e26843ca3295" (UID: "93a8f498-bf0c-43f6-aad8-e26843ca3295"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:23:52 crc kubenswrapper[4760]: I0121 16:23:52.846144 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93a8f498-bf0c-43f6-aad8-e26843ca3295-kube-api-access-5r2g8" (OuterVolumeSpecName: "kube-api-access-5r2g8") pod "93a8f498-bf0c-43f6-aad8-e26843ca3295" (UID: "93a8f498-bf0c-43f6-aad8-e26843ca3295"). InnerVolumeSpecName "kube-api-access-5r2g8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:23:52 crc kubenswrapper[4760]: I0121 16:23:52.865689 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93a8f498-bf0c-43f6-aad8-e26843ca3295-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "93a8f498-bf0c-43f6-aad8-e26843ca3295" (UID: "93a8f498-bf0c-43f6-aad8-e26843ca3295"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:23:52 crc kubenswrapper[4760]: I0121 16:23:52.871133 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93a8f498-bf0c-43f6-aad8-e26843ca3295-inventory" (OuterVolumeSpecName: "inventory") pod "93a8f498-bf0c-43f6-aad8-e26843ca3295" (UID: "93a8f498-bf0c-43f6-aad8-e26843ca3295"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:23:52 crc kubenswrapper[4760]: I0121 16:23:52.871892 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93a8f498-bf0c-43f6-aad8-e26843ca3295-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "93a8f498-bf0c-43f6-aad8-e26843ca3295" (UID: "93a8f498-bf0c-43f6-aad8-e26843ca3295"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:23:52 crc kubenswrapper[4760]: I0121 16:23:52.876334 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93a8f498-bf0c-43f6-aad8-e26843ca3295-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "93a8f498-bf0c-43f6-aad8-e26843ca3295" (UID: "93a8f498-bf0c-43f6-aad8-e26843ca3295"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:23:52 crc kubenswrapper[4760]: I0121 16:23:52.941908 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/93a8f498-bf0c-43f6-aad8-e26843ca3295-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 16:23:52 crc kubenswrapper[4760]: I0121 16:23:52.941959 4760 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/93a8f498-bf0c-43f6-aad8-e26843ca3295-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 21 16:23:52 crc kubenswrapper[4760]: I0121 16:23:52.941976 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5r2g8\" (UniqueName: \"kubernetes.io/projected/93a8f498-bf0c-43f6-aad8-e26843ca3295-kube-api-access-5r2g8\") on node \"crc\" DevicePath \"\"" Jan 21 16:23:52 crc kubenswrapper[4760]: I0121 16:23:52.941993 4760 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93a8f498-bf0c-43f6-aad8-e26843ca3295-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:23:52 crc kubenswrapper[4760]: I0121 16:23:52.942011 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/93a8f498-bf0c-43f6-aad8-e26843ca3295-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:23:52 crc kubenswrapper[4760]: I0121 16:23:52.942023 4760 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/93a8f498-bf0c-43f6-aad8-e26843ca3295-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 21 16:23:53 crc kubenswrapper[4760]: I0121 16:23:53.263029 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2" event={"ID":"93a8f498-bf0c-43f6-aad8-e26843ca3295","Type":"ContainerDied","Data":"5250118359e1dde01ba84f1d9d0ff15035dbaf36ff2ab3ffab4d63dd18e9fb28"} Jan 21 16:23:53 crc kubenswrapper[4760]: I0121 16:23:53.263577 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5250118359e1dde01ba84f1d9d0ff15035dbaf36ff2ab3ffab4d63dd18e9fb28" Jan 21 16:23:53 crc kubenswrapper[4760]: I0121 16:23:53.263051 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2" Jan 21 16:23:53 crc kubenswrapper[4760]: I0121 16:23:53.370524 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6f958"] Jan 21 16:23:53 crc kubenswrapper[4760]: E0121 16:23:53.370953 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93a8f498-bf0c-43f6-aad8-e26843ca3295" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 21 16:23:53 crc kubenswrapper[4760]: I0121 16:23:53.370969 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="93a8f498-bf0c-43f6-aad8-e26843ca3295" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 21 16:23:53 crc kubenswrapper[4760]: I0121 16:23:53.371130 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="93a8f498-bf0c-43f6-aad8-e26843ca3295" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 21 16:23:53 crc kubenswrapper[4760]: I0121 16:23:53.371787 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6f958" Jan 21 16:23:53 crc kubenswrapper[4760]: I0121 16:23:53.376130 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-brqp8" Jan 21 16:23:53 crc kubenswrapper[4760]: I0121 16:23:53.376421 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 16:23:53 crc kubenswrapper[4760]: I0121 16:23:53.376592 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 16:23:53 crc kubenswrapper[4760]: I0121 16:23:53.376719 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:23:53 crc kubenswrapper[4760]: I0121 16:23:53.376804 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 21 16:23:53 crc kubenswrapper[4760]: I0121 16:23:53.398662 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6f958"] Jan 21 16:23:53 crc kubenswrapper[4760]: I0121 16:23:53.451839 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq6n8\" (UniqueName: \"kubernetes.io/projected/60b03623-4db5-445f-89b4-61f39ac04dc2-kube-api-access-qq6n8\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6f958\" (UID: \"60b03623-4db5-445f-89b4-61f39ac04dc2\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6f958" Jan 21 16:23:53 crc kubenswrapper[4760]: I0121 16:23:53.451909 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/60b03623-4db5-445f-89b4-61f39ac04dc2-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6f958\" (UID: 
\"60b03623-4db5-445f-89b4-61f39ac04dc2\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6f958" Jan 21 16:23:53 crc kubenswrapper[4760]: I0121 16:23:53.451947 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60b03623-4db5-445f-89b4-61f39ac04dc2-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6f958\" (UID: \"60b03623-4db5-445f-89b4-61f39ac04dc2\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6f958" Jan 21 16:23:53 crc kubenswrapper[4760]: I0121 16:23:53.452025 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/60b03623-4db5-445f-89b4-61f39ac04dc2-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6f958\" (UID: \"60b03623-4db5-445f-89b4-61f39ac04dc2\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6f958" Jan 21 16:23:53 crc kubenswrapper[4760]: I0121 16:23:53.452108 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/60b03623-4db5-445f-89b4-61f39ac04dc2-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6f958\" (UID: \"60b03623-4db5-445f-89b4-61f39ac04dc2\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6f958" Jan 21 16:23:53 crc kubenswrapper[4760]: I0121 16:23:53.554109 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/60b03623-4db5-445f-89b4-61f39ac04dc2-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6f958\" (UID: \"60b03623-4db5-445f-89b4-61f39ac04dc2\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6f958" Jan 21 16:23:53 crc kubenswrapper[4760]: I0121 16:23:53.554209 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-qq6n8\" (UniqueName: \"kubernetes.io/projected/60b03623-4db5-445f-89b4-61f39ac04dc2-kube-api-access-qq6n8\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6f958\" (UID: \"60b03623-4db5-445f-89b4-61f39ac04dc2\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6f958" Jan 21 16:23:53 crc kubenswrapper[4760]: I0121 16:23:53.554243 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/60b03623-4db5-445f-89b4-61f39ac04dc2-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6f958\" (UID: \"60b03623-4db5-445f-89b4-61f39ac04dc2\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6f958" Jan 21 16:23:53 crc kubenswrapper[4760]: I0121 16:23:53.554274 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60b03623-4db5-445f-89b4-61f39ac04dc2-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6f958\" (UID: \"60b03623-4db5-445f-89b4-61f39ac04dc2\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6f958" Jan 21 16:23:53 crc kubenswrapper[4760]: I0121 16:23:53.554387 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/60b03623-4db5-445f-89b4-61f39ac04dc2-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6f958\" (UID: \"60b03623-4db5-445f-89b4-61f39ac04dc2\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6f958" Jan 21 16:23:53 crc kubenswrapper[4760]: I0121 16:23:53.560649 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60b03623-4db5-445f-89b4-61f39ac04dc2-libvirt-combined-ca-bundle\") pod 
\"libvirt-edpm-deployment-openstack-edpm-ipam-6f958\" (UID: \"60b03623-4db5-445f-89b4-61f39ac04dc2\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6f958" Jan 21 16:23:53 crc kubenswrapper[4760]: I0121 16:23:53.560723 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/60b03623-4db5-445f-89b4-61f39ac04dc2-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6f958\" (UID: \"60b03623-4db5-445f-89b4-61f39ac04dc2\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6f958" Jan 21 16:23:53 crc kubenswrapper[4760]: I0121 16:23:53.561173 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/60b03623-4db5-445f-89b4-61f39ac04dc2-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6f958\" (UID: \"60b03623-4db5-445f-89b4-61f39ac04dc2\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6f958" Jan 21 16:23:53 crc kubenswrapper[4760]: I0121 16:23:53.561672 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/60b03623-4db5-445f-89b4-61f39ac04dc2-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6f958\" (UID: \"60b03623-4db5-445f-89b4-61f39ac04dc2\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6f958" Jan 21 16:23:53 crc kubenswrapper[4760]: I0121 16:23:53.576496 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qq6n8\" (UniqueName: \"kubernetes.io/projected/60b03623-4db5-445f-89b4-61f39ac04dc2-kube-api-access-qq6n8\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6f958\" (UID: \"60b03623-4db5-445f-89b4-61f39ac04dc2\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6f958" Jan 21 16:23:53 crc kubenswrapper[4760]: I0121 16:23:53.687711 4760 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6f958" Jan 21 16:23:53 crc kubenswrapper[4760]: I0121 16:23:53.990035 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6f958"] Jan 21 16:23:53 crc kubenswrapper[4760]: W0121 16:23:53.994371 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod60b03623_4db5_445f_89b4_61f39ac04dc2.slice/crio-5babf90ccad8bd8ad09c515a3d684264a9e6c9bd041a327a64e8b366f61b75a9 WatchSource:0}: Error finding container 5babf90ccad8bd8ad09c515a3d684264a9e6c9bd041a327a64e8b366f61b75a9: Status 404 returned error can't find the container with id 5babf90ccad8bd8ad09c515a3d684264a9e6c9bd041a327a64e8b366f61b75a9 Jan 21 16:23:53 crc kubenswrapper[4760]: I0121 16:23:53.997228 4760 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 16:23:54 crc kubenswrapper[4760]: I0121 16:23:54.271672 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6f958" event={"ID":"60b03623-4db5-445f-89b4-61f39ac04dc2","Type":"ContainerStarted","Data":"5babf90ccad8bd8ad09c515a3d684264a9e6c9bd041a327a64e8b366f61b75a9"} Jan 21 16:23:55 crc kubenswrapper[4760]: I0121 16:23:55.283016 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6f958" event={"ID":"60b03623-4db5-445f-89b4-61f39ac04dc2","Type":"ContainerStarted","Data":"0a10d6eba529254935110a96670ecf47a8c0c66342fb33262ccde3794d781379"} Jan 21 16:23:55 crc kubenswrapper[4760]: I0121 16:23:55.302689 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6f958" podStartSLOduration=1.673653482 podStartE2EDuration="2.30267194s" podCreationTimestamp="2026-01-21 16:23:53 +0000 UTC" 
firstStartedPulling="2026-01-21 16:23:53.996978138 +0000 UTC m=+2204.664747716" lastFinishedPulling="2026-01-21 16:23:54.625996576 +0000 UTC m=+2205.293766174" observedRunningTime="2026-01-21 16:23:55.299146694 +0000 UTC m=+2205.966916272" watchObservedRunningTime="2026-01-21 16:23:55.30267194 +0000 UTC m=+2205.970441518" Jan 21 16:23:57 crc kubenswrapper[4760]: I0121 16:23:57.270086 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-v6nms"] Jan 21 16:23:57 crc kubenswrapper[4760]: I0121 16:23:57.273065 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v6nms" Jan 21 16:23:57 crc kubenswrapper[4760]: I0121 16:23:57.296758 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v6nms"] Jan 21 16:23:57 crc kubenswrapper[4760]: I0121 16:23:57.428209 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f5bc\" (UniqueName: \"kubernetes.io/projected/0c7a437f-1774-4ac1-ac7c-ca3972a52909-kube-api-access-2f5bc\") pod \"community-operators-v6nms\" (UID: \"0c7a437f-1774-4ac1-ac7c-ca3972a52909\") " pod="openshift-marketplace/community-operators-v6nms" Jan 21 16:23:57 crc kubenswrapper[4760]: I0121 16:23:57.428342 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c7a437f-1774-4ac1-ac7c-ca3972a52909-catalog-content\") pod \"community-operators-v6nms\" (UID: \"0c7a437f-1774-4ac1-ac7c-ca3972a52909\") " pod="openshift-marketplace/community-operators-v6nms" Jan 21 16:23:57 crc kubenswrapper[4760]: I0121 16:23:57.428538 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c7a437f-1774-4ac1-ac7c-ca3972a52909-utilities\") pod 
\"community-operators-v6nms\" (UID: \"0c7a437f-1774-4ac1-ac7c-ca3972a52909\") " pod="openshift-marketplace/community-operators-v6nms" Jan 21 16:23:57 crc kubenswrapper[4760]: I0121 16:23:57.530894 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2f5bc\" (UniqueName: \"kubernetes.io/projected/0c7a437f-1774-4ac1-ac7c-ca3972a52909-kube-api-access-2f5bc\") pod \"community-operators-v6nms\" (UID: \"0c7a437f-1774-4ac1-ac7c-ca3972a52909\") " pod="openshift-marketplace/community-operators-v6nms" Jan 21 16:23:57 crc kubenswrapper[4760]: I0121 16:23:57.531665 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c7a437f-1774-4ac1-ac7c-ca3972a52909-catalog-content\") pod \"community-operators-v6nms\" (UID: \"0c7a437f-1774-4ac1-ac7c-ca3972a52909\") " pod="openshift-marketplace/community-operators-v6nms" Jan 21 16:23:57 crc kubenswrapper[4760]: I0121 16:23:57.531901 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c7a437f-1774-4ac1-ac7c-ca3972a52909-utilities\") pod \"community-operators-v6nms\" (UID: \"0c7a437f-1774-4ac1-ac7c-ca3972a52909\") " pod="openshift-marketplace/community-operators-v6nms" Jan 21 16:23:57 crc kubenswrapper[4760]: I0121 16:23:57.532268 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c7a437f-1774-4ac1-ac7c-ca3972a52909-catalog-content\") pod \"community-operators-v6nms\" (UID: \"0c7a437f-1774-4ac1-ac7c-ca3972a52909\") " pod="openshift-marketplace/community-operators-v6nms" Jan 21 16:23:57 crc kubenswrapper[4760]: I0121 16:23:57.532387 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c7a437f-1774-4ac1-ac7c-ca3972a52909-utilities\") pod \"community-operators-v6nms\" (UID: 
\"0c7a437f-1774-4ac1-ac7c-ca3972a52909\") " pod="openshift-marketplace/community-operators-v6nms" Jan 21 16:23:57 crc kubenswrapper[4760]: I0121 16:23:57.557823 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2f5bc\" (UniqueName: \"kubernetes.io/projected/0c7a437f-1774-4ac1-ac7c-ca3972a52909-kube-api-access-2f5bc\") pod \"community-operators-v6nms\" (UID: \"0c7a437f-1774-4ac1-ac7c-ca3972a52909\") " pod="openshift-marketplace/community-operators-v6nms" Jan 21 16:23:57 crc kubenswrapper[4760]: I0121 16:23:57.599786 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v6nms" Jan 21 16:23:58 crc kubenswrapper[4760]: I0121 16:23:58.139156 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v6nms"] Jan 21 16:23:58 crc kubenswrapper[4760]: W0121 16:23:58.163474 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c7a437f_1774_4ac1_ac7c_ca3972a52909.slice/crio-548f8ebc64c299074d6218cd513d936626c3b066aef7ede8f9b36ccb0a5f05e5 WatchSource:0}: Error finding container 548f8ebc64c299074d6218cd513d936626c3b066aef7ede8f9b36ccb0a5f05e5: Status 404 returned error can't find the container with id 548f8ebc64c299074d6218cd513d936626c3b066aef7ede8f9b36ccb0a5f05e5 Jan 21 16:23:58 crc kubenswrapper[4760]: I0121 16:23:58.312235 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v6nms" event={"ID":"0c7a437f-1774-4ac1-ac7c-ca3972a52909","Type":"ContainerStarted","Data":"548f8ebc64c299074d6218cd513d936626c3b066aef7ede8f9b36ccb0a5f05e5"} Jan 21 16:23:59 crc kubenswrapper[4760]: I0121 16:23:59.349575 4760 generic.go:334] "Generic (PLEG): container finished" podID="0c7a437f-1774-4ac1-ac7c-ca3972a52909" containerID="07f9b6ae96554e9613dfd8a778e773348c966b2ec953a3f96f61db7e3f710b52" exitCode=0 Jan 21 16:23:59 
crc kubenswrapper[4760]: I0121 16:23:59.349826 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v6nms" event={"ID":"0c7a437f-1774-4ac1-ac7c-ca3972a52909","Type":"ContainerDied","Data":"07f9b6ae96554e9613dfd8a778e773348c966b2ec953a3f96f61db7e3f710b52"} Jan 21 16:24:00 crc kubenswrapper[4760]: I0121 16:24:00.359937 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v6nms" event={"ID":"0c7a437f-1774-4ac1-ac7c-ca3972a52909","Type":"ContainerStarted","Data":"fbe7e1317b27c3912ad3164c35ad5f0b9b48007188364d1b999a7f46a610cc17"} Jan 21 16:24:01 crc kubenswrapper[4760]: I0121 16:24:01.369034 4760 generic.go:334] "Generic (PLEG): container finished" podID="0c7a437f-1774-4ac1-ac7c-ca3972a52909" containerID="fbe7e1317b27c3912ad3164c35ad5f0b9b48007188364d1b999a7f46a610cc17" exitCode=0 Jan 21 16:24:01 crc kubenswrapper[4760]: I0121 16:24:01.369116 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v6nms" event={"ID":"0c7a437f-1774-4ac1-ac7c-ca3972a52909","Type":"ContainerDied","Data":"fbe7e1317b27c3912ad3164c35ad5f0b9b48007188364d1b999a7f46a610cc17"} Jan 21 16:24:02 crc kubenswrapper[4760]: I0121 16:24:02.383533 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v6nms" event={"ID":"0c7a437f-1774-4ac1-ac7c-ca3972a52909","Type":"ContainerStarted","Data":"63bf0a4ff10935e3faf271693e54041d2103cd5ccb917ef167c47926dda6d557"} Jan 21 16:24:02 crc kubenswrapper[4760]: I0121 16:24:02.409438 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-v6nms" podStartSLOduration=2.869911423 podStartE2EDuration="5.40942177s" podCreationTimestamp="2026-01-21 16:23:57 +0000 UTC" firstStartedPulling="2026-01-21 16:23:59.351686373 +0000 UTC m=+2210.019455951" lastFinishedPulling="2026-01-21 16:24:01.89119672 +0000 UTC m=+2212.558966298" 
observedRunningTime="2026-01-21 16:24:02.404136501 +0000 UTC m=+2213.071906079" watchObservedRunningTime="2026-01-21 16:24:02.40942177 +0000 UTC m=+2213.077191348" Jan 21 16:24:07 crc kubenswrapper[4760]: I0121 16:24:07.600891 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-v6nms" Jan 21 16:24:07 crc kubenswrapper[4760]: I0121 16:24:07.601602 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-v6nms" Jan 21 16:24:07 crc kubenswrapper[4760]: I0121 16:24:07.649976 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-v6nms" Jan 21 16:24:08 crc kubenswrapper[4760]: I0121 16:24:08.490868 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-v6nms" Jan 21 16:24:08 crc kubenswrapper[4760]: I0121 16:24:08.833752 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v6nms"] Jan 21 16:24:10 crc kubenswrapper[4760]: I0121 16:24:10.459976 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-v6nms" podUID="0c7a437f-1774-4ac1-ac7c-ca3972a52909" containerName="registry-server" containerID="cri-o://63bf0a4ff10935e3faf271693e54041d2103cd5ccb917ef167c47926dda6d557" gracePeriod=2 Jan 21 16:24:10 crc kubenswrapper[4760]: I0121 16:24:10.945704 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-v6nms" Jan 21 16:24:10 crc kubenswrapper[4760]: I0121 16:24:10.997504 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2f5bc\" (UniqueName: \"kubernetes.io/projected/0c7a437f-1774-4ac1-ac7c-ca3972a52909-kube-api-access-2f5bc\") pod \"0c7a437f-1774-4ac1-ac7c-ca3972a52909\" (UID: \"0c7a437f-1774-4ac1-ac7c-ca3972a52909\") " Jan 21 16:24:10 crc kubenswrapper[4760]: I0121 16:24:10.997593 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c7a437f-1774-4ac1-ac7c-ca3972a52909-utilities\") pod \"0c7a437f-1774-4ac1-ac7c-ca3972a52909\" (UID: \"0c7a437f-1774-4ac1-ac7c-ca3972a52909\") " Jan 21 16:24:10 crc kubenswrapper[4760]: I0121 16:24:10.997633 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c7a437f-1774-4ac1-ac7c-ca3972a52909-catalog-content\") pod \"0c7a437f-1774-4ac1-ac7c-ca3972a52909\" (UID: \"0c7a437f-1774-4ac1-ac7c-ca3972a52909\") " Jan 21 16:24:10 crc kubenswrapper[4760]: I0121 16:24:10.999037 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c7a437f-1774-4ac1-ac7c-ca3972a52909-utilities" (OuterVolumeSpecName: "utilities") pod "0c7a437f-1774-4ac1-ac7c-ca3972a52909" (UID: "0c7a437f-1774-4ac1-ac7c-ca3972a52909"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:24:11 crc kubenswrapper[4760]: I0121 16:24:11.006294 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c7a437f-1774-4ac1-ac7c-ca3972a52909-kube-api-access-2f5bc" (OuterVolumeSpecName: "kube-api-access-2f5bc") pod "0c7a437f-1774-4ac1-ac7c-ca3972a52909" (UID: "0c7a437f-1774-4ac1-ac7c-ca3972a52909"). InnerVolumeSpecName "kube-api-access-2f5bc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:24:11 crc kubenswrapper[4760]: I0121 16:24:11.057945 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c7a437f-1774-4ac1-ac7c-ca3972a52909-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0c7a437f-1774-4ac1-ac7c-ca3972a52909" (UID: "0c7a437f-1774-4ac1-ac7c-ca3972a52909"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:24:11 crc kubenswrapper[4760]: I0121 16:24:11.100671 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2f5bc\" (UniqueName: \"kubernetes.io/projected/0c7a437f-1774-4ac1-ac7c-ca3972a52909-kube-api-access-2f5bc\") on node \"crc\" DevicePath \"\"" Jan 21 16:24:11 crc kubenswrapper[4760]: I0121 16:24:11.100727 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c7a437f-1774-4ac1-ac7c-ca3972a52909-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:24:11 crc kubenswrapper[4760]: I0121 16:24:11.100744 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c7a437f-1774-4ac1-ac7c-ca3972a52909-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:24:11 crc kubenswrapper[4760]: I0121 16:24:11.470725 4760 generic.go:334] "Generic (PLEG): container finished" podID="0c7a437f-1774-4ac1-ac7c-ca3972a52909" containerID="63bf0a4ff10935e3faf271693e54041d2103cd5ccb917ef167c47926dda6d557" exitCode=0 Jan 21 16:24:11 crc kubenswrapper[4760]: I0121 16:24:11.470766 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v6nms" event={"ID":"0c7a437f-1774-4ac1-ac7c-ca3972a52909","Type":"ContainerDied","Data":"63bf0a4ff10935e3faf271693e54041d2103cd5ccb917ef167c47926dda6d557"} Jan 21 16:24:11 crc kubenswrapper[4760]: I0121 16:24:11.470792 4760 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-v6nms" event={"ID":"0c7a437f-1774-4ac1-ac7c-ca3972a52909","Type":"ContainerDied","Data":"548f8ebc64c299074d6218cd513d936626c3b066aef7ede8f9b36ccb0a5f05e5"} Jan 21 16:24:11 crc kubenswrapper[4760]: I0121 16:24:11.470799 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v6nms" Jan 21 16:24:11 crc kubenswrapper[4760]: I0121 16:24:11.470809 4760 scope.go:117] "RemoveContainer" containerID="63bf0a4ff10935e3faf271693e54041d2103cd5ccb917ef167c47926dda6d557" Jan 21 16:24:11 crc kubenswrapper[4760]: I0121 16:24:11.492506 4760 scope.go:117] "RemoveContainer" containerID="fbe7e1317b27c3912ad3164c35ad5f0b9b48007188364d1b999a7f46a610cc17" Jan 21 16:24:11 crc kubenswrapper[4760]: I0121 16:24:11.565888 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v6nms"] Jan 21 16:24:11 crc kubenswrapper[4760]: I0121 16:24:11.586698 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-v6nms"] Jan 21 16:24:11 crc kubenswrapper[4760]: I0121 16:24:11.588954 4760 scope.go:117] "RemoveContainer" containerID="07f9b6ae96554e9613dfd8a778e773348c966b2ec953a3f96f61db7e3f710b52" Jan 21 16:24:11 crc kubenswrapper[4760]: I0121 16:24:11.620263 4760 scope.go:117] "RemoveContainer" containerID="63bf0a4ff10935e3faf271693e54041d2103cd5ccb917ef167c47926dda6d557" Jan 21 16:24:11 crc kubenswrapper[4760]: E0121 16:24:11.620583 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63bf0a4ff10935e3faf271693e54041d2103cd5ccb917ef167c47926dda6d557\": container with ID starting with 63bf0a4ff10935e3faf271693e54041d2103cd5ccb917ef167c47926dda6d557 not found: ID does not exist" containerID="63bf0a4ff10935e3faf271693e54041d2103cd5ccb917ef167c47926dda6d557" Jan 21 16:24:11 crc kubenswrapper[4760]: I0121 
16:24:11.620613 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63bf0a4ff10935e3faf271693e54041d2103cd5ccb917ef167c47926dda6d557"} err="failed to get container status \"63bf0a4ff10935e3faf271693e54041d2103cd5ccb917ef167c47926dda6d557\": rpc error: code = NotFound desc = could not find container \"63bf0a4ff10935e3faf271693e54041d2103cd5ccb917ef167c47926dda6d557\": container with ID starting with 63bf0a4ff10935e3faf271693e54041d2103cd5ccb917ef167c47926dda6d557 not found: ID does not exist" Jan 21 16:24:11 crc kubenswrapper[4760]: I0121 16:24:11.620633 4760 scope.go:117] "RemoveContainer" containerID="fbe7e1317b27c3912ad3164c35ad5f0b9b48007188364d1b999a7f46a610cc17" Jan 21 16:24:11 crc kubenswrapper[4760]: E0121 16:24:11.621140 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbe7e1317b27c3912ad3164c35ad5f0b9b48007188364d1b999a7f46a610cc17\": container with ID starting with fbe7e1317b27c3912ad3164c35ad5f0b9b48007188364d1b999a7f46a610cc17 not found: ID does not exist" containerID="fbe7e1317b27c3912ad3164c35ad5f0b9b48007188364d1b999a7f46a610cc17" Jan 21 16:24:11 crc kubenswrapper[4760]: I0121 16:24:11.621168 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbe7e1317b27c3912ad3164c35ad5f0b9b48007188364d1b999a7f46a610cc17"} err="failed to get container status \"fbe7e1317b27c3912ad3164c35ad5f0b9b48007188364d1b999a7f46a610cc17\": rpc error: code = NotFound desc = could not find container \"fbe7e1317b27c3912ad3164c35ad5f0b9b48007188364d1b999a7f46a610cc17\": container with ID starting with fbe7e1317b27c3912ad3164c35ad5f0b9b48007188364d1b999a7f46a610cc17 not found: ID does not exist" Jan 21 16:24:11 crc kubenswrapper[4760]: I0121 16:24:11.621190 4760 scope.go:117] "RemoveContainer" containerID="07f9b6ae96554e9613dfd8a778e773348c966b2ec953a3f96f61db7e3f710b52" Jan 21 16:24:11 crc 
kubenswrapper[4760]: E0121 16:24:11.621530 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07f9b6ae96554e9613dfd8a778e773348c966b2ec953a3f96f61db7e3f710b52\": container with ID starting with 07f9b6ae96554e9613dfd8a778e773348c966b2ec953a3f96f61db7e3f710b52 not found: ID does not exist" containerID="07f9b6ae96554e9613dfd8a778e773348c966b2ec953a3f96f61db7e3f710b52" Jan 21 16:24:11 crc kubenswrapper[4760]: I0121 16:24:11.621602 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07f9b6ae96554e9613dfd8a778e773348c966b2ec953a3f96f61db7e3f710b52"} err="failed to get container status \"07f9b6ae96554e9613dfd8a778e773348c966b2ec953a3f96f61db7e3f710b52\": rpc error: code = NotFound desc = could not find container \"07f9b6ae96554e9613dfd8a778e773348c966b2ec953a3f96f61db7e3f710b52\": container with ID starting with 07f9b6ae96554e9613dfd8a778e773348c966b2ec953a3f96f61db7e3f710b52 not found: ID does not exist" Jan 21 16:24:11 crc kubenswrapper[4760]: I0121 16:24:11.635163 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c7a437f-1774-4ac1-ac7c-ca3972a52909" path="/var/lib/kubelet/pods/0c7a437f-1774-4ac1-ac7c-ca3972a52909/volumes" Jan 21 16:24:13 crc kubenswrapper[4760]: I0121 16:24:13.042981 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cc8zr"] Jan 21 16:24:13 crc kubenswrapper[4760]: E0121 16:24:13.043642 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c7a437f-1774-4ac1-ac7c-ca3972a52909" containerName="extract-content" Jan 21 16:24:13 crc kubenswrapper[4760]: I0121 16:24:13.043657 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c7a437f-1774-4ac1-ac7c-ca3972a52909" containerName="extract-content" Jan 21 16:24:13 crc kubenswrapper[4760]: E0121 16:24:13.043679 4760 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="0c7a437f-1774-4ac1-ac7c-ca3972a52909" containerName="extract-utilities" Jan 21 16:24:13 crc kubenswrapper[4760]: I0121 16:24:13.043685 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c7a437f-1774-4ac1-ac7c-ca3972a52909" containerName="extract-utilities" Jan 21 16:24:13 crc kubenswrapper[4760]: E0121 16:24:13.043698 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c7a437f-1774-4ac1-ac7c-ca3972a52909" containerName="registry-server" Jan 21 16:24:13 crc kubenswrapper[4760]: I0121 16:24:13.043704 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c7a437f-1774-4ac1-ac7c-ca3972a52909" containerName="registry-server" Jan 21 16:24:13 crc kubenswrapper[4760]: I0121 16:24:13.043872 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c7a437f-1774-4ac1-ac7c-ca3972a52909" containerName="registry-server" Jan 21 16:24:13 crc kubenswrapper[4760]: I0121 16:24:13.045133 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cc8zr" Jan 21 16:24:13 crc kubenswrapper[4760]: I0121 16:24:13.058394 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cc8zr"] Jan 21 16:24:13 crc kubenswrapper[4760]: I0121 16:24:13.202910 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b839040-a339-4196-bb4d-9cff91b973cf-catalog-content\") pod \"redhat-operators-cc8zr\" (UID: \"2b839040-a339-4196-bb4d-9cff91b973cf\") " pod="openshift-marketplace/redhat-operators-cc8zr" Jan 21 16:24:13 crc kubenswrapper[4760]: I0121 16:24:13.202968 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6ctd\" (UniqueName: \"kubernetes.io/projected/2b839040-a339-4196-bb4d-9cff91b973cf-kube-api-access-t6ctd\") pod \"redhat-operators-cc8zr\" (UID: 
\"2b839040-a339-4196-bb4d-9cff91b973cf\") " pod="openshift-marketplace/redhat-operators-cc8zr" Jan 21 16:24:13 crc kubenswrapper[4760]: I0121 16:24:13.203000 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b839040-a339-4196-bb4d-9cff91b973cf-utilities\") pod \"redhat-operators-cc8zr\" (UID: \"2b839040-a339-4196-bb4d-9cff91b973cf\") " pod="openshift-marketplace/redhat-operators-cc8zr" Jan 21 16:24:13 crc kubenswrapper[4760]: I0121 16:24:13.304930 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b839040-a339-4196-bb4d-9cff91b973cf-catalog-content\") pod \"redhat-operators-cc8zr\" (UID: \"2b839040-a339-4196-bb4d-9cff91b973cf\") " pod="openshift-marketplace/redhat-operators-cc8zr" Jan 21 16:24:13 crc kubenswrapper[4760]: I0121 16:24:13.304999 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6ctd\" (UniqueName: \"kubernetes.io/projected/2b839040-a339-4196-bb4d-9cff91b973cf-kube-api-access-t6ctd\") pod \"redhat-operators-cc8zr\" (UID: \"2b839040-a339-4196-bb4d-9cff91b973cf\") " pod="openshift-marketplace/redhat-operators-cc8zr" Jan 21 16:24:13 crc kubenswrapper[4760]: I0121 16:24:13.305031 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b839040-a339-4196-bb4d-9cff91b973cf-utilities\") pod \"redhat-operators-cc8zr\" (UID: \"2b839040-a339-4196-bb4d-9cff91b973cf\") " pod="openshift-marketplace/redhat-operators-cc8zr" Jan 21 16:24:13 crc kubenswrapper[4760]: I0121 16:24:13.305811 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b839040-a339-4196-bb4d-9cff91b973cf-utilities\") pod \"redhat-operators-cc8zr\" (UID: \"2b839040-a339-4196-bb4d-9cff91b973cf\") " 
pod="openshift-marketplace/redhat-operators-cc8zr" Jan 21 16:24:13 crc kubenswrapper[4760]: I0121 16:24:13.306000 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b839040-a339-4196-bb4d-9cff91b973cf-catalog-content\") pod \"redhat-operators-cc8zr\" (UID: \"2b839040-a339-4196-bb4d-9cff91b973cf\") " pod="openshift-marketplace/redhat-operators-cc8zr" Jan 21 16:24:13 crc kubenswrapper[4760]: I0121 16:24:13.329721 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6ctd\" (UniqueName: \"kubernetes.io/projected/2b839040-a339-4196-bb4d-9cff91b973cf-kube-api-access-t6ctd\") pod \"redhat-operators-cc8zr\" (UID: \"2b839040-a339-4196-bb4d-9cff91b973cf\") " pod="openshift-marketplace/redhat-operators-cc8zr" Jan 21 16:24:13 crc kubenswrapper[4760]: I0121 16:24:13.365864 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cc8zr" Jan 21 16:24:13 crc kubenswrapper[4760]: I0121 16:24:13.842244 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cc8zr"] Jan 21 16:24:13 crc kubenswrapper[4760]: W0121 16:24:13.844626 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b839040_a339_4196_bb4d_9cff91b973cf.slice/crio-673a066d310b25f596aea8431ab043b7e33476ebb8a44b0f967a97be2599faf0 WatchSource:0}: Error finding container 673a066d310b25f596aea8431ab043b7e33476ebb8a44b0f967a97be2599faf0: Status 404 returned error can't find the container with id 673a066d310b25f596aea8431ab043b7e33476ebb8a44b0f967a97be2599faf0 Jan 21 16:24:14 crc kubenswrapper[4760]: I0121 16:24:14.502444 4760 generic.go:334] "Generic (PLEG): container finished" podID="2b839040-a339-4196-bb4d-9cff91b973cf" containerID="c8f6e05fb354e36521d754752235c694ce04588330cc86112e41097350f13c7b" exitCode=0 Jan 21 16:24:14 
crc kubenswrapper[4760]: I0121 16:24:14.502488 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cc8zr" event={"ID":"2b839040-a339-4196-bb4d-9cff91b973cf","Type":"ContainerDied","Data":"c8f6e05fb354e36521d754752235c694ce04588330cc86112e41097350f13c7b"} Jan 21 16:24:14 crc kubenswrapper[4760]: I0121 16:24:14.502734 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cc8zr" event={"ID":"2b839040-a339-4196-bb4d-9cff91b973cf","Type":"ContainerStarted","Data":"673a066d310b25f596aea8431ab043b7e33476ebb8a44b0f967a97be2599faf0"} Jan 21 16:24:16 crc kubenswrapper[4760]: I0121 16:24:16.526969 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cc8zr" event={"ID":"2b839040-a339-4196-bb4d-9cff91b973cf","Type":"ContainerStarted","Data":"63bf09003b484cc79872620ed0e41974d6406cc61e2f94689ea2405be020e995"} Jan 21 16:24:19 crc kubenswrapper[4760]: I0121 16:24:19.554589 4760 generic.go:334] "Generic (PLEG): container finished" podID="2b839040-a339-4196-bb4d-9cff91b973cf" containerID="63bf09003b484cc79872620ed0e41974d6406cc61e2f94689ea2405be020e995" exitCode=0 Jan 21 16:24:19 crc kubenswrapper[4760]: I0121 16:24:19.554663 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cc8zr" event={"ID":"2b839040-a339-4196-bb4d-9cff91b973cf","Type":"ContainerDied","Data":"63bf09003b484cc79872620ed0e41974d6406cc61e2f94689ea2405be020e995"} Jan 21 16:24:21 crc kubenswrapper[4760]: I0121 16:24:21.579088 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cc8zr" event={"ID":"2b839040-a339-4196-bb4d-9cff91b973cf","Type":"ContainerStarted","Data":"f29ebd3c7656bf8ab7214d39cef7474aefdc68f2ff125c58a7fa2a5271144a3c"} Jan 21 16:24:21 crc kubenswrapper[4760]: I0121 16:24:21.607644 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-cc8zr" podStartSLOduration=2.394836806 podStartE2EDuration="8.607607919s" podCreationTimestamp="2026-01-21 16:24:13 +0000 UTC" firstStartedPulling="2026-01-21 16:24:14.504111769 +0000 UTC m=+2225.171881347" lastFinishedPulling="2026-01-21 16:24:20.716882882 +0000 UTC m=+2231.384652460" observedRunningTime="2026-01-21 16:24:21.603977521 +0000 UTC m=+2232.271747139" watchObservedRunningTime="2026-01-21 16:24:21.607607919 +0000 UTC m=+2232.275377497" Jan 21 16:24:23 crc kubenswrapper[4760]: I0121 16:24:23.366272 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cc8zr" Jan 21 16:24:23 crc kubenswrapper[4760]: I0121 16:24:23.366576 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cc8zr" Jan 21 16:24:24 crc kubenswrapper[4760]: I0121 16:24:24.407546 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cc8zr" podUID="2b839040-a339-4196-bb4d-9cff91b973cf" containerName="registry-server" probeResult="failure" output=< Jan 21 16:24:24 crc kubenswrapper[4760]: timeout: failed to connect service ":50051" within 1s Jan 21 16:24:24 crc kubenswrapper[4760]: > Jan 21 16:24:33 crc kubenswrapper[4760]: I0121 16:24:33.412066 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cc8zr" Jan 21 16:24:33 crc kubenswrapper[4760]: I0121 16:24:33.460764 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cc8zr" Jan 21 16:24:33 crc kubenswrapper[4760]: I0121 16:24:33.647105 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cc8zr"] Jan 21 16:24:34 crc kubenswrapper[4760]: I0121 16:24:34.689780 4760 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-operators-cc8zr" podUID="2b839040-a339-4196-bb4d-9cff91b973cf" containerName="registry-server" containerID="cri-o://f29ebd3c7656bf8ab7214d39cef7474aefdc68f2ff125c58a7fa2a5271144a3c" gracePeriod=2 Jan 21 16:24:35 crc kubenswrapper[4760]: I0121 16:24:35.159552 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cc8zr" Jan 21 16:24:35 crc kubenswrapper[4760]: I0121 16:24:35.184577 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b839040-a339-4196-bb4d-9cff91b973cf-catalog-content\") pod \"2b839040-a339-4196-bb4d-9cff91b973cf\" (UID: \"2b839040-a339-4196-bb4d-9cff91b973cf\") " Jan 21 16:24:35 crc kubenswrapper[4760]: I0121 16:24:35.184723 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6ctd\" (UniqueName: \"kubernetes.io/projected/2b839040-a339-4196-bb4d-9cff91b973cf-kube-api-access-t6ctd\") pod \"2b839040-a339-4196-bb4d-9cff91b973cf\" (UID: \"2b839040-a339-4196-bb4d-9cff91b973cf\") " Jan 21 16:24:35 crc kubenswrapper[4760]: I0121 16:24:35.185252 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b839040-a339-4196-bb4d-9cff91b973cf-utilities\") pod \"2b839040-a339-4196-bb4d-9cff91b973cf\" (UID: \"2b839040-a339-4196-bb4d-9cff91b973cf\") " Jan 21 16:24:35 crc kubenswrapper[4760]: I0121 16:24:35.186569 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b839040-a339-4196-bb4d-9cff91b973cf-utilities" (OuterVolumeSpecName: "utilities") pod "2b839040-a339-4196-bb4d-9cff91b973cf" (UID: "2b839040-a339-4196-bb4d-9cff91b973cf"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:24:35 crc kubenswrapper[4760]: I0121 16:24:35.194538 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b839040-a339-4196-bb4d-9cff91b973cf-kube-api-access-t6ctd" (OuterVolumeSpecName: "kube-api-access-t6ctd") pod "2b839040-a339-4196-bb4d-9cff91b973cf" (UID: "2b839040-a339-4196-bb4d-9cff91b973cf"). InnerVolumeSpecName "kube-api-access-t6ctd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:24:35 crc kubenswrapper[4760]: I0121 16:24:35.287377 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b839040-a339-4196-bb4d-9cff91b973cf-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:24:35 crc kubenswrapper[4760]: I0121 16:24:35.287411 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6ctd\" (UniqueName: \"kubernetes.io/projected/2b839040-a339-4196-bb4d-9cff91b973cf-kube-api-access-t6ctd\") on node \"crc\" DevicePath \"\"" Jan 21 16:24:35 crc kubenswrapper[4760]: I0121 16:24:35.312539 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b839040-a339-4196-bb4d-9cff91b973cf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2b839040-a339-4196-bb4d-9cff91b973cf" (UID: "2b839040-a339-4196-bb4d-9cff91b973cf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:24:35 crc kubenswrapper[4760]: I0121 16:24:35.388634 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b839040-a339-4196-bb4d-9cff91b973cf-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:24:35 crc kubenswrapper[4760]: I0121 16:24:35.701714 4760 generic.go:334] "Generic (PLEG): container finished" podID="2b839040-a339-4196-bb4d-9cff91b973cf" containerID="f29ebd3c7656bf8ab7214d39cef7474aefdc68f2ff125c58a7fa2a5271144a3c" exitCode=0 Jan 21 16:24:35 crc kubenswrapper[4760]: I0121 16:24:35.701760 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cc8zr" event={"ID":"2b839040-a339-4196-bb4d-9cff91b973cf","Type":"ContainerDied","Data":"f29ebd3c7656bf8ab7214d39cef7474aefdc68f2ff125c58a7fa2a5271144a3c"} Jan 21 16:24:35 crc kubenswrapper[4760]: I0121 16:24:35.701796 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cc8zr" Jan 21 16:24:35 crc kubenswrapper[4760]: I0121 16:24:35.701816 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cc8zr" event={"ID":"2b839040-a339-4196-bb4d-9cff91b973cf","Type":"ContainerDied","Data":"673a066d310b25f596aea8431ab043b7e33476ebb8a44b0f967a97be2599faf0"} Jan 21 16:24:35 crc kubenswrapper[4760]: I0121 16:24:35.701841 4760 scope.go:117] "RemoveContainer" containerID="f29ebd3c7656bf8ab7214d39cef7474aefdc68f2ff125c58a7fa2a5271144a3c" Jan 21 16:24:35 crc kubenswrapper[4760]: I0121 16:24:35.729009 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cc8zr"] Jan 21 16:24:35 crc kubenswrapper[4760]: I0121 16:24:35.732317 4760 scope.go:117] "RemoveContainer" containerID="63bf09003b484cc79872620ed0e41974d6406cc61e2f94689ea2405be020e995" Jan 21 16:24:35 crc kubenswrapper[4760]: I0121 16:24:35.736064 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cc8zr"] Jan 21 16:24:35 crc kubenswrapper[4760]: I0121 16:24:35.767067 4760 scope.go:117] "RemoveContainer" containerID="c8f6e05fb354e36521d754752235c694ce04588330cc86112e41097350f13c7b" Jan 21 16:24:35 crc kubenswrapper[4760]: I0121 16:24:35.816278 4760 scope.go:117] "RemoveContainer" containerID="f29ebd3c7656bf8ab7214d39cef7474aefdc68f2ff125c58a7fa2a5271144a3c" Jan 21 16:24:35 crc kubenswrapper[4760]: E0121 16:24:35.817059 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f29ebd3c7656bf8ab7214d39cef7474aefdc68f2ff125c58a7fa2a5271144a3c\": container with ID starting with f29ebd3c7656bf8ab7214d39cef7474aefdc68f2ff125c58a7fa2a5271144a3c not found: ID does not exist" containerID="f29ebd3c7656bf8ab7214d39cef7474aefdc68f2ff125c58a7fa2a5271144a3c" Jan 21 16:24:35 crc kubenswrapper[4760]: I0121 16:24:35.817101 4760 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f29ebd3c7656bf8ab7214d39cef7474aefdc68f2ff125c58a7fa2a5271144a3c"} err="failed to get container status \"f29ebd3c7656bf8ab7214d39cef7474aefdc68f2ff125c58a7fa2a5271144a3c\": rpc error: code = NotFound desc = could not find container \"f29ebd3c7656bf8ab7214d39cef7474aefdc68f2ff125c58a7fa2a5271144a3c\": container with ID starting with f29ebd3c7656bf8ab7214d39cef7474aefdc68f2ff125c58a7fa2a5271144a3c not found: ID does not exist" Jan 21 16:24:35 crc kubenswrapper[4760]: I0121 16:24:35.817128 4760 scope.go:117] "RemoveContainer" containerID="63bf09003b484cc79872620ed0e41974d6406cc61e2f94689ea2405be020e995" Jan 21 16:24:35 crc kubenswrapper[4760]: E0121 16:24:35.817381 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63bf09003b484cc79872620ed0e41974d6406cc61e2f94689ea2405be020e995\": container with ID starting with 63bf09003b484cc79872620ed0e41974d6406cc61e2f94689ea2405be020e995 not found: ID does not exist" containerID="63bf09003b484cc79872620ed0e41974d6406cc61e2f94689ea2405be020e995" Jan 21 16:24:35 crc kubenswrapper[4760]: I0121 16:24:35.817402 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63bf09003b484cc79872620ed0e41974d6406cc61e2f94689ea2405be020e995"} err="failed to get container status \"63bf09003b484cc79872620ed0e41974d6406cc61e2f94689ea2405be020e995\": rpc error: code = NotFound desc = could not find container \"63bf09003b484cc79872620ed0e41974d6406cc61e2f94689ea2405be020e995\": container with ID starting with 63bf09003b484cc79872620ed0e41974d6406cc61e2f94689ea2405be020e995 not found: ID does not exist" Jan 21 16:24:35 crc kubenswrapper[4760]: I0121 16:24:35.817414 4760 scope.go:117] "RemoveContainer" containerID="c8f6e05fb354e36521d754752235c694ce04588330cc86112e41097350f13c7b" Jan 21 16:24:35 crc kubenswrapper[4760]: E0121 
16:24:35.817650 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8f6e05fb354e36521d754752235c694ce04588330cc86112e41097350f13c7b\": container with ID starting with c8f6e05fb354e36521d754752235c694ce04588330cc86112e41097350f13c7b not found: ID does not exist" containerID="c8f6e05fb354e36521d754752235c694ce04588330cc86112e41097350f13c7b"
Jan 21 16:24:35 crc kubenswrapper[4760]: I0121 16:24:35.817669 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8f6e05fb354e36521d754752235c694ce04588330cc86112e41097350f13c7b"} err="failed to get container status \"c8f6e05fb354e36521d754752235c694ce04588330cc86112e41097350f13c7b\": rpc error: code = NotFound desc = could not find container \"c8f6e05fb354e36521d754752235c694ce04588330cc86112e41097350f13c7b\": container with ID starting with c8f6e05fb354e36521d754752235c694ce04588330cc86112e41097350f13c7b not found: ID does not exist"
Jan 21 16:24:36 crc kubenswrapper[4760]: I0121 16:24:36.869991 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4dzt8"]
Jan 21 16:24:36 crc kubenswrapper[4760]: E0121 16:24:36.870591 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b839040-a339-4196-bb4d-9cff91b973cf" containerName="extract-content"
Jan 21 16:24:36 crc kubenswrapper[4760]: I0121 16:24:36.870612 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b839040-a339-4196-bb4d-9cff91b973cf" containerName="extract-content"
Jan 21 16:24:36 crc kubenswrapper[4760]: E0121 16:24:36.870633 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b839040-a339-4196-bb4d-9cff91b973cf" containerName="extract-utilities"
Jan 21 16:24:36 crc kubenswrapper[4760]: I0121 16:24:36.870645 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b839040-a339-4196-bb4d-9cff91b973cf" containerName="extract-utilities"
Jan 21 16:24:36 crc kubenswrapper[4760]: E0121 16:24:36.870670 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b839040-a339-4196-bb4d-9cff91b973cf" containerName="registry-server"
Jan 21 16:24:36 crc kubenswrapper[4760]: I0121 16:24:36.870680 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b839040-a339-4196-bb4d-9cff91b973cf" containerName="registry-server"
Jan 21 16:24:36 crc kubenswrapper[4760]: I0121 16:24:36.871094 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b839040-a339-4196-bb4d-9cff91b973cf" containerName="registry-server"
Jan 21 16:24:36 crc kubenswrapper[4760]: I0121 16:24:36.873125 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4dzt8"
Jan 21 16:24:36 crc kubenswrapper[4760]: I0121 16:24:36.880190 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4dzt8"]
Jan 21 16:24:36 crc kubenswrapper[4760]: I0121 16:24:36.919415 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/023aac49-e118-4724-a85d-897e697b089d-utilities\") pod \"certified-operators-4dzt8\" (UID: \"023aac49-e118-4724-a85d-897e697b089d\") " pod="openshift-marketplace/certified-operators-4dzt8"
Jan 21 16:24:36 crc kubenswrapper[4760]: I0121 16:24:36.919518 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xld9\" (UniqueName: \"kubernetes.io/projected/023aac49-e118-4724-a85d-897e697b089d-kube-api-access-6xld9\") pod \"certified-operators-4dzt8\" (UID: \"023aac49-e118-4724-a85d-897e697b089d\") " pod="openshift-marketplace/certified-operators-4dzt8"
Jan 21 16:24:36 crc kubenswrapper[4760]: I0121 16:24:36.919594 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/023aac49-e118-4724-a85d-897e697b089d-catalog-content\") pod \"certified-operators-4dzt8\" (UID: \"023aac49-e118-4724-a85d-897e697b089d\") " pod="openshift-marketplace/certified-operators-4dzt8"
Jan 21 16:24:37 crc kubenswrapper[4760]: I0121 16:24:37.021786 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xld9\" (UniqueName: \"kubernetes.io/projected/023aac49-e118-4724-a85d-897e697b089d-kube-api-access-6xld9\") pod \"certified-operators-4dzt8\" (UID: \"023aac49-e118-4724-a85d-897e697b089d\") " pod="openshift-marketplace/certified-operators-4dzt8"
Jan 21 16:24:37 crc kubenswrapper[4760]: I0121 16:24:37.022538 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/023aac49-e118-4724-a85d-897e697b089d-catalog-content\") pod \"certified-operators-4dzt8\" (UID: \"023aac49-e118-4724-a85d-897e697b089d\") " pod="openshift-marketplace/certified-operators-4dzt8"
Jan 21 16:24:37 crc kubenswrapper[4760]: I0121 16:24:37.022806 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/023aac49-e118-4724-a85d-897e697b089d-utilities\") pod \"certified-operators-4dzt8\" (UID: \"023aac49-e118-4724-a85d-897e697b089d\") " pod="openshift-marketplace/certified-operators-4dzt8"
Jan 21 16:24:37 crc kubenswrapper[4760]: I0121 16:24:37.023152 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/023aac49-e118-4724-a85d-897e697b089d-catalog-content\") pod \"certified-operators-4dzt8\" (UID: \"023aac49-e118-4724-a85d-897e697b089d\") " pod="openshift-marketplace/certified-operators-4dzt8"
Jan 21 16:24:37 crc kubenswrapper[4760]: I0121 16:24:37.023284 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/023aac49-e118-4724-a85d-897e697b089d-utilities\") pod \"certified-operators-4dzt8\" (UID: \"023aac49-e118-4724-a85d-897e697b089d\") " pod="openshift-marketplace/certified-operators-4dzt8"
Jan 21 16:24:37 crc kubenswrapper[4760]: I0121 16:24:37.039966 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xld9\" (UniqueName: \"kubernetes.io/projected/023aac49-e118-4724-a85d-897e697b089d-kube-api-access-6xld9\") pod \"certified-operators-4dzt8\" (UID: \"023aac49-e118-4724-a85d-897e697b089d\") " pod="openshift-marketplace/certified-operators-4dzt8"
Jan 21 16:24:37 crc kubenswrapper[4760]: I0121 16:24:37.193863 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4dzt8"
Jan 21 16:24:37 crc kubenswrapper[4760]: I0121 16:24:37.515779 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4dzt8"]
Jan 21 16:24:37 crc kubenswrapper[4760]: I0121 16:24:37.640456 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b839040-a339-4196-bb4d-9cff91b973cf" path="/var/lib/kubelet/pods/2b839040-a339-4196-bb4d-9cff91b973cf/volumes"
Jan 21 16:24:37 crc kubenswrapper[4760]: I0121 16:24:37.723691 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4dzt8" event={"ID":"023aac49-e118-4724-a85d-897e697b089d","Type":"ContainerStarted","Data":"e195b7b5ba7462f91def02e55075bb08ba38af54a953383f2c261e34f0b33455"}
Jan 21 16:24:38 crc kubenswrapper[4760]: I0121 16:24:38.734924 4760 generic.go:334] "Generic (PLEG): container finished" podID="023aac49-e118-4724-a85d-897e697b089d" containerID="4e241a18697d5c5757311ea051e76f24a3245f652188d4603c5e1b18a9c4f6a3" exitCode=0
Jan 21 16:24:38 crc kubenswrapper[4760]: I0121 16:24:38.735000 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4dzt8" event={"ID":"023aac49-e118-4724-a85d-897e697b089d","Type":"ContainerDied","Data":"4e241a18697d5c5757311ea051e76f24a3245f652188d4603c5e1b18a9c4f6a3"}
Jan 21 16:24:39 crc kubenswrapper[4760]: I0121 16:24:39.745357 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4dzt8" event={"ID":"023aac49-e118-4724-a85d-897e697b089d","Type":"ContainerStarted","Data":"afde7675b50d0ca8e391b3a16f7a2be95a0d95dad1eb5a65d2823df08a4c1705"}
Jan 21 16:24:40 crc kubenswrapper[4760]: I0121 16:24:40.762095 4760 generic.go:334] "Generic (PLEG): container finished" podID="023aac49-e118-4724-a85d-897e697b089d" containerID="afde7675b50d0ca8e391b3a16f7a2be95a0d95dad1eb5a65d2823df08a4c1705" exitCode=0
Jan 21 16:24:40 crc kubenswrapper[4760]: I0121 16:24:40.762141 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4dzt8" event={"ID":"023aac49-e118-4724-a85d-897e697b089d","Type":"ContainerDied","Data":"afde7675b50d0ca8e391b3a16f7a2be95a0d95dad1eb5a65d2823df08a4c1705"}
Jan 21 16:24:41 crc kubenswrapper[4760]: I0121 16:24:41.772882 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4dzt8" event={"ID":"023aac49-e118-4724-a85d-897e697b089d","Type":"ContainerStarted","Data":"7d466aad70a041fd9dce823b015c78fe7a98e78d2883e1cf81958a9cdfc6bd3c"}
Jan 21 16:24:41 crc kubenswrapper[4760]: I0121 16:24:41.796659 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4dzt8" podStartSLOduration=3.040814187 podStartE2EDuration="5.796634043s" podCreationTimestamp="2026-01-21 16:24:36 +0000 UTC" firstStartedPulling="2026-01-21 16:24:38.73703335 +0000 UTC m=+2249.404802928" lastFinishedPulling="2026-01-21 16:24:41.492853216 +0000 UTC m=+2252.160622784" observedRunningTime="2026-01-21 16:24:41.792190185 +0000 UTC m=+2252.459959763" watchObservedRunningTime="2026-01-21 16:24:41.796634043 +0000 UTC m=+2252.464403611"
Jan 21 16:24:47 crc kubenswrapper[4760]: I0121 16:24:47.193993 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4dzt8"
Jan 21 16:24:47 crc kubenswrapper[4760]: I0121 16:24:47.194698 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4dzt8"
Jan 21 16:24:47 crc kubenswrapper[4760]: I0121 16:24:47.241436 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4dzt8"
Jan 21 16:24:47 crc kubenswrapper[4760]: I0121 16:24:47.876778 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4dzt8"
Jan 21 16:24:47 crc kubenswrapper[4760]: I0121 16:24:47.926702 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4dzt8"]
Jan 21 16:24:49 crc kubenswrapper[4760]: I0121 16:24:49.853588 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4dzt8" podUID="023aac49-e118-4724-a85d-897e697b089d" containerName="registry-server" containerID="cri-o://7d466aad70a041fd9dce823b015c78fe7a98e78d2883e1cf81958a9cdfc6bd3c" gracePeriod=2
Jan 21 16:24:50 crc kubenswrapper[4760]: I0121 16:24:50.304098 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4dzt8"
Jan 21 16:24:50 crc kubenswrapper[4760]: I0121 16:24:50.388202 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/023aac49-e118-4724-a85d-897e697b089d-catalog-content\") pod \"023aac49-e118-4724-a85d-897e697b089d\" (UID: \"023aac49-e118-4724-a85d-897e697b089d\") "
Jan 21 16:24:50 crc kubenswrapper[4760]: I0121 16:24:50.388484 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/023aac49-e118-4724-a85d-897e697b089d-utilities\") pod \"023aac49-e118-4724-a85d-897e697b089d\" (UID: \"023aac49-e118-4724-a85d-897e697b089d\") "
Jan 21 16:24:50 crc kubenswrapper[4760]: I0121 16:24:50.388553 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xld9\" (UniqueName: \"kubernetes.io/projected/023aac49-e118-4724-a85d-897e697b089d-kube-api-access-6xld9\") pod \"023aac49-e118-4724-a85d-897e697b089d\" (UID: \"023aac49-e118-4724-a85d-897e697b089d\") "
Jan 21 16:24:50 crc kubenswrapper[4760]: I0121 16:24:50.389434 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/023aac49-e118-4724-a85d-897e697b089d-utilities" (OuterVolumeSpecName: "utilities") pod "023aac49-e118-4724-a85d-897e697b089d" (UID: "023aac49-e118-4724-a85d-897e697b089d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:24:50 crc kubenswrapper[4760]: I0121 16:24:50.395286 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/023aac49-e118-4724-a85d-897e697b089d-kube-api-access-6xld9" (OuterVolumeSpecName: "kube-api-access-6xld9") pod "023aac49-e118-4724-a85d-897e697b089d" (UID: "023aac49-e118-4724-a85d-897e697b089d"). InnerVolumeSpecName "kube-api-access-6xld9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:24:50 crc kubenswrapper[4760]: I0121 16:24:50.491814 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/023aac49-e118-4724-a85d-897e697b089d-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 16:24:50 crc kubenswrapper[4760]: I0121 16:24:50.491903 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xld9\" (UniqueName: \"kubernetes.io/projected/023aac49-e118-4724-a85d-897e697b089d-kube-api-access-6xld9\") on node \"crc\" DevicePath \"\""
Jan 21 16:24:50 crc kubenswrapper[4760]: I0121 16:24:50.694828 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/023aac49-e118-4724-a85d-897e697b089d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "023aac49-e118-4724-a85d-897e697b089d" (UID: "023aac49-e118-4724-a85d-897e697b089d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:24:50 crc kubenswrapper[4760]: I0121 16:24:50.696894 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/023aac49-e118-4724-a85d-897e697b089d-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 16:24:50 crc kubenswrapper[4760]: I0121 16:24:50.865901 4760 generic.go:334] "Generic (PLEG): container finished" podID="023aac49-e118-4724-a85d-897e697b089d" containerID="7d466aad70a041fd9dce823b015c78fe7a98e78d2883e1cf81958a9cdfc6bd3c" exitCode=0
Jan 21 16:24:50 crc kubenswrapper[4760]: I0121 16:24:50.865979 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4dzt8"
Jan 21 16:24:50 crc kubenswrapper[4760]: I0121 16:24:50.865981 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4dzt8" event={"ID":"023aac49-e118-4724-a85d-897e697b089d","Type":"ContainerDied","Data":"7d466aad70a041fd9dce823b015c78fe7a98e78d2883e1cf81958a9cdfc6bd3c"}
Jan 21 16:24:50 crc kubenswrapper[4760]: I0121 16:24:50.866070 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4dzt8" event={"ID":"023aac49-e118-4724-a85d-897e697b089d","Type":"ContainerDied","Data":"e195b7b5ba7462f91def02e55075bb08ba38af54a953383f2c261e34f0b33455"}
Jan 21 16:24:50 crc kubenswrapper[4760]: I0121 16:24:50.866114 4760 scope.go:117] "RemoveContainer" containerID="7d466aad70a041fd9dce823b015c78fe7a98e78d2883e1cf81958a9cdfc6bd3c"
Jan 21 16:24:50 crc kubenswrapper[4760]: I0121 16:24:50.901355 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fnq74"]
Jan 21 16:24:50 crc kubenswrapper[4760]: E0121 16:24:50.901745 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="023aac49-e118-4724-a85d-897e697b089d" containerName="extract-content"
Jan 21 16:24:50 crc kubenswrapper[4760]: I0121 16:24:50.901761 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="023aac49-e118-4724-a85d-897e697b089d" containerName="extract-content"
Jan 21 16:24:50 crc kubenswrapper[4760]: E0121 16:24:50.901780 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="023aac49-e118-4724-a85d-897e697b089d" containerName="registry-server"
Jan 21 16:24:50 crc kubenswrapper[4760]: I0121 16:24:50.901786 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="023aac49-e118-4724-a85d-897e697b089d" containerName="registry-server"
Jan 21 16:24:50 crc kubenswrapper[4760]: E0121 16:24:50.901812 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="023aac49-e118-4724-a85d-897e697b089d" containerName="extract-utilities"
Jan 21 16:24:50 crc kubenswrapper[4760]: I0121 16:24:50.901819 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="023aac49-e118-4724-a85d-897e697b089d" containerName="extract-utilities"
Jan 21 16:24:50 crc kubenswrapper[4760]: I0121 16:24:50.901983 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="023aac49-e118-4724-a85d-897e697b089d" containerName="registry-server"
Jan 21 16:24:50 crc kubenswrapper[4760]: I0121 16:24:50.903241 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fnq74"
Jan 21 16:24:50 crc kubenswrapper[4760]: I0121 16:24:50.912171 4760 scope.go:117] "RemoveContainer" containerID="afde7675b50d0ca8e391b3a16f7a2be95a0d95dad1eb5a65d2823df08a4c1705"
Jan 21 16:24:50 crc kubenswrapper[4760]: I0121 16:24:50.923551 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4dzt8"]
Jan 21 16:24:50 crc kubenswrapper[4760]: I0121 16:24:50.941871 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fnq74"]
Jan 21 16:24:50 crc kubenswrapper[4760]: I0121 16:24:50.955415 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4dzt8"]
Jan 21 16:24:50 crc kubenswrapper[4760]: I0121 16:24:50.964231 4760 scope.go:117] "RemoveContainer" containerID="4e241a18697d5c5757311ea051e76f24a3245f652188d4603c5e1b18a9c4f6a3"
Jan 21 16:24:50 crc kubenswrapper[4760]: I0121 16:24:50.998574 4760 scope.go:117] "RemoveContainer" containerID="7d466aad70a041fd9dce823b015c78fe7a98e78d2883e1cf81958a9cdfc6bd3c"
Jan 21 16:24:50 crc kubenswrapper[4760]: E0121 16:24:50.998905 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d466aad70a041fd9dce823b015c78fe7a98e78d2883e1cf81958a9cdfc6bd3c\": container with ID starting with 7d466aad70a041fd9dce823b015c78fe7a98e78d2883e1cf81958a9cdfc6bd3c not found: ID does not exist" containerID="7d466aad70a041fd9dce823b015c78fe7a98e78d2883e1cf81958a9cdfc6bd3c"
Jan 21 16:24:50 crc kubenswrapper[4760]: I0121 16:24:50.998942 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d466aad70a041fd9dce823b015c78fe7a98e78d2883e1cf81958a9cdfc6bd3c"} err="failed to get container status \"7d466aad70a041fd9dce823b015c78fe7a98e78d2883e1cf81958a9cdfc6bd3c\": rpc error: code = NotFound desc = could not find container \"7d466aad70a041fd9dce823b015c78fe7a98e78d2883e1cf81958a9cdfc6bd3c\": container with ID starting with 7d466aad70a041fd9dce823b015c78fe7a98e78d2883e1cf81958a9cdfc6bd3c not found: ID does not exist"
Jan 21 16:24:50 crc kubenswrapper[4760]: I0121 16:24:50.998963 4760 scope.go:117] "RemoveContainer" containerID="afde7675b50d0ca8e391b3a16f7a2be95a0d95dad1eb5a65d2823df08a4c1705"
Jan 21 16:24:50 crc kubenswrapper[4760]: E0121 16:24:50.999435 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afde7675b50d0ca8e391b3a16f7a2be95a0d95dad1eb5a65d2823df08a4c1705\": container with ID starting with afde7675b50d0ca8e391b3a16f7a2be95a0d95dad1eb5a65d2823df08a4c1705 not found: ID does not exist" containerID="afde7675b50d0ca8e391b3a16f7a2be95a0d95dad1eb5a65d2823df08a4c1705"
Jan 21 16:24:50 crc kubenswrapper[4760]: I0121 16:24:50.999461 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afde7675b50d0ca8e391b3a16f7a2be95a0d95dad1eb5a65d2823df08a4c1705"} err="failed to get container status \"afde7675b50d0ca8e391b3a16f7a2be95a0d95dad1eb5a65d2823df08a4c1705\": rpc error: code = NotFound desc = could not find container \"afde7675b50d0ca8e391b3a16f7a2be95a0d95dad1eb5a65d2823df08a4c1705\": container with ID starting with afde7675b50d0ca8e391b3a16f7a2be95a0d95dad1eb5a65d2823df08a4c1705 not found: ID does not exist"
Jan 21 16:24:50 crc kubenswrapper[4760]: I0121 16:24:50.999520 4760 scope.go:117] "RemoveContainer" containerID="4e241a18697d5c5757311ea051e76f24a3245f652188d4603c5e1b18a9c4f6a3"
Jan 21 16:24:50 crc kubenswrapper[4760]: E0121 16:24:50.999920 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e241a18697d5c5757311ea051e76f24a3245f652188d4603c5e1b18a9c4f6a3\": container with ID starting with 4e241a18697d5c5757311ea051e76f24a3245f652188d4603c5e1b18a9c4f6a3 not found: ID does not exist" containerID="4e241a18697d5c5757311ea051e76f24a3245f652188d4603c5e1b18a9c4f6a3"
Jan 21 16:24:50 crc kubenswrapper[4760]: I0121 16:24:50.999957 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e241a18697d5c5757311ea051e76f24a3245f652188d4603c5e1b18a9c4f6a3"} err="failed to get container status \"4e241a18697d5c5757311ea051e76f24a3245f652188d4603c5e1b18a9c4f6a3\": rpc error: code = NotFound desc = could not find container \"4e241a18697d5c5757311ea051e76f24a3245f652188d4603c5e1b18a9c4f6a3\": container with ID starting with 4e241a18697d5c5757311ea051e76f24a3245f652188d4603c5e1b18a9c4f6a3 not found: ID does not exist"
Jan 21 16:24:51 crc kubenswrapper[4760]: I0121 16:24:51.004032 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6ad6c06-5c94-4c4f-a0ea-577481974f45-catalog-content\") pod \"redhat-marketplace-fnq74\" (UID: \"a6ad6c06-5c94-4c4f-a0ea-577481974f45\") " pod="openshift-marketplace/redhat-marketplace-fnq74"
Jan 21 16:24:51 crc kubenswrapper[4760]: I0121 16:24:51.004063 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6ad6c06-5c94-4c4f-a0ea-577481974f45-utilities\") pod \"redhat-marketplace-fnq74\" (UID: \"a6ad6c06-5c94-4c4f-a0ea-577481974f45\") " pod="openshift-marketplace/redhat-marketplace-fnq74"
Jan 21 16:24:51 crc kubenswrapper[4760]: I0121 16:24:51.004124 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9447f\" (UniqueName: \"kubernetes.io/projected/a6ad6c06-5c94-4c4f-a0ea-577481974f45-kube-api-access-9447f\") pod \"redhat-marketplace-fnq74\" (UID: \"a6ad6c06-5c94-4c4f-a0ea-577481974f45\") " pod="openshift-marketplace/redhat-marketplace-fnq74"
Jan 21 16:24:51 crc kubenswrapper[4760]: I0121 16:24:51.106509 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6ad6c06-5c94-4c4f-a0ea-577481974f45-catalog-content\") pod \"redhat-marketplace-fnq74\" (UID: \"a6ad6c06-5c94-4c4f-a0ea-577481974f45\") " pod="openshift-marketplace/redhat-marketplace-fnq74"
Jan 21 16:24:51 crc kubenswrapper[4760]: I0121 16:24:51.106578 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6ad6c06-5c94-4c4f-a0ea-577481974f45-utilities\") pod \"redhat-marketplace-fnq74\" (UID: \"a6ad6c06-5c94-4c4f-a0ea-577481974f45\") " pod="openshift-marketplace/redhat-marketplace-fnq74"
Jan 21 16:24:51 crc kubenswrapper[4760]: I0121 16:24:51.106673 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9447f\" (UniqueName: \"kubernetes.io/projected/a6ad6c06-5c94-4c4f-a0ea-577481974f45-kube-api-access-9447f\") pod \"redhat-marketplace-fnq74\" (UID: \"a6ad6c06-5c94-4c4f-a0ea-577481974f45\") " pod="openshift-marketplace/redhat-marketplace-fnq74"
Jan 21 16:24:51 crc kubenswrapper[4760]: I0121 16:24:51.107439 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6ad6c06-5c94-4c4f-a0ea-577481974f45-catalog-content\") pod \"redhat-marketplace-fnq74\" (UID: \"a6ad6c06-5c94-4c4f-a0ea-577481974f45\") " pod="openshift-marketplace/redhat-marketplace-fnq74"
Jan 21 16:24:51 crc kubenswrapper[4760]: I0121 16:24:51.107538 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6ad6c06-5c94-4c4f-a0ea-577481974f45-utilities\") pod \"redhat-marketplace-fnq74\" (UID: \"a6ad6c06-5c94-4c4f-a0ea-577481974f45\") " pod="openshift-marketplace/redhat-marketplace-fnq74"
Jan 21 16:24:51 crc kubenswrapper[4760]: I0121 16:24:51.131140 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9447f\" (UniqueName: \"kubernetes.io/projected/a6ad6c06-5c94-4c4f-a0ea-577481974f45-kube-api-access-9447f\") pod \"redhat-marketplace-fnq74\" (UID: \"a6ad6c06-5c94-4c4f-a0ea-577481974f45\") " pod="openshift-marketplace/redhat-marketplace-fnq74"
Jan 21 16:24:51 crc kubenswrapper[4760]: I0121 16:24:51.219852 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fnq74"
Jan 21 16:24:51 crc kubenswrapper[4760]: I0121 16:24:51.637993 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="023aac49-e118-4724-a85d-897e697b089d" path="/var/lib/kubelet/pods/023aac49-e118-4724-a85d-897e697b089d/volumes"
Jan 21 16:24:51 crc kubenswrapper[4760]: W0121 16:24:51.698502 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6ad6c06_5c94_4c4f_a0ea_577481974f45.slice/crio-9c834efed315b9a99a7958b031cc4168032b84408d58f7993aa130230fa38bde WatchSource:0}: Error finding container 9c834efed315b9a99a7958b031cc4168032b84408d58f7993aa130230fa38bde: Status 404 returned error can't find the container with id 9c834efed315b9a99a7958b031cc4168032b84408d58f7993aa130230fa38bde
Jan 21 16:24:51 crc kubenswrapper[4760]: I0121 16:24:51.700977 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fnq74"]
Jan 21 16:24:51 crc kubenswrapper[4760]: I0121 16:24:51.879180 4760 generic.go:334] "Generic (PLEG): container finished" podID="a6ad6c06-5c94-4c4f-a0ea-577481974f45" containerID="ffd4c3463050beae08b5b08712985252c15082ef001b2fa98058089411b69862" exitCode=0
Jan 21 16:24:51 crc kubenswrapper[4760]: I0121 16:24:51.879224 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fnq74" event={"ID":"a6ad6c06-5c94-4c4f-a0ea-577481974f45","Type":"ContainerDied","Data":"ffd4c3463050beae08b5b08712985252c15082ef001b2fa98058089411b69862"}
Jan 21 16:24:51 crc kubenswrapper[4760]: I0121 16:24:51.879259 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fnq74" event={"ID":"a6ad6c06-5c94-4c4f-a0ea-577481974f45","Type":"ContainerStarted","Data":"9c834efed315b9a99a7958b031cc4168032b84408d58f7993aa130230fa38bde"}
Jan 21 16:24:52 crc kubenswrapper[4760]: I0121 16:24:52.890660 4760 generic.go:334] "Generic (PLEG): container finished" podID="a6ad6c06-5c94-4c4f-a0ea-577481974f45" containerID="f77db6ff60571bbe7e2ef81e78e1a89dafdd8c2b20165ff785010c8fca2edbb3" exitCode=0
Jan 21 16:24:52 crc kubenswrapper[4760]: I0121 16:24:52.890731 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fnq74" event={"ID":"a6ad6c06-5c94-4c4f-a0ea-577481974f45","Type":"ContainerDied","Data":"f77db6ff60571bbe7e2ef81e78e1a89dafdd8c2b20165ff785010c8fca2edbb3"}
Jan 21 16:24:53 crc kubenswrapper[4760]: I0121 16:24:53.904381 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fnq74" event={"ID":"a6ad6c06-5c94-4c4f-a0ea-577481974f45","Type":"ContainerStarted","Data":"176103a3f5de724fe24636e85e452a0cf39d6935c30051e2fb3619a5c091706f"}
Jan 21 16:24:53 crc kubenswrapper[4760]: I0121 16:24:53.930413 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fnq74" podStartSLOduration=2.445458962 podStartE2EDuration="3.930392797s" podCreationTimestamp="2026-01-21 16:24:50 +0000 UTC" firstStartedPulling="2026-01-21 16:24:51.881138005 +0000 UTC m=+2262.548907583" lastFinishedPulling="2026-01-21 16:24:53.36607184 +0000 UTC m=+2264.033841418" observedRunningTime="2026-01-21 16:24:53.923896869 +0000 UTC m=+2264.591666457" watchObservedRunningTime="2026-01-21 16:24:53.930392797 +0000 UTC m=+2264.598162365"
Jan 21 16:25:01 crc kubenswrapper[4760]: I0121 16:25:01.220163 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fnq74"
Jan 21 16:25:01 crc kubenswrapper[4760]: I0121 16:25:01.221313 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fnq74"
Jan 21 16:25:01 crc kubenswrapper[4760]: I0121 16:25:01.278964 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fnq74"
Jan 21 16:25:02 crc kubenswrapper[4760]: I0121 16:25:02.018119 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fnq74"
Jan 21 16:25:02 crc kubenswrapper[4760]: I0121 16:25:02.075061 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fnq74"]
Jan 21 16:25:03 crc kubenswrapper[4760]: I0121 16:25:03.991211 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fnq74" podUID="a6ad6c06-5c94-4c4f-a0ea-577481974f45" containerName="registry-server" containerID="cri-o://176103a3f5de724fe24636e85e452a0cf39d6935c30051e2fb3619a5c091706f" gracePeriod=2
Jan 21 16:25:04 crc kubenswrapper[4760]: I0121 16:25:04.450953 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fnq74"
Jan 21 16:25:04 crc kubenswrapper[4760]: I0121 16:25:04.465781 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9447f\" (UniqueName: \"kubernetes.io/projected/a6ad6c06-5c94-4c4f-a0ea-577481974f45-kube-api-access-9447f\") pod \"a6ad6c06-5c94-4c4f-a0ea-577481974f45\" (UID: \"a6ad6c06-5c94-4c4f-a0ea-577481974f45\") "
Jan 21 16:25:04 crc kubenswrapper[4760]: I0121 16:25:04.465911 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6ad6c06-5c94-4c4f-a0ea-577481974f45-catalog-content\") pod \"a6ad6c06-5c94-4c4f-a0ea-577481974f45\" (UID: \"a6ad6c06-5c94-4c4f-a0ea-577481974f45\") "
Jan 21 16:25:04 crc kubenswrapper[4760]: I0121 16:25:04.482732 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6ad6c06-5c94-4c4f-a0ea-577481974f45-kube-api-access-9447f" (OuterVolumeSpecName: "kube-api-access-9447f") pod "a6ad6c06-5c94-4c4f-a0ea-577481974f45" (UID: "a6ad6c06-5c94-4c4f-a0ea-577481974f45"). InnerVolumeSpecName "kube-api-access-9447f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:25:04 crc kubenswrapper[4760]: I0121 16:25:04.482881 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6ad6c06-5c94-4c4f-a0ea-577481974f45-utilities\") pod \"a6ad6c06-5c94-4c4f-a0ea-577481974f45\" (UID: \"a6ad6c06-5c94-4c4f-a0ea-577481974f45\") "
Jan 21 16:25:04 crc kubenswrapper[4760]: I0121 16:25:04.484013 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9447f\" (UniqueName: \"kubernetes.io/projected/a6ad6c06-5c94-4c4f-a0ea-577481974f45-kube-api-access-9447f\") on node \"crc\" DevicePath \"\""
Jan 21 16:25:04 crc kubenswrapper[4760]: I0121 16:25:04.484435 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6ad6c06-5c94-4c4f-a0ea-577481974f45-utilities" (OuterVolumeSpecName: "utilities") pod "a6ad6c06-5c94-4c4f-a0ea-577481974f45" (UID: "a6ad6c06-5c94-4c4f-a0ea-577481974f45"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:25:04 crc kubenswrapper[4760]: I0121 16:25:04.511456 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6ad6c06-5c94-4c4f-a0ea-577481974f45-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a6ad6c06-5c94-4c4f-a0ea-577481974f45" (UID: "a6ad6c06-5c94-4c4f-a0ea-577481974f45"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:25:04 crc kubenswrapper[4760]: I0121 16:25:04.586172 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6ad6c06-5c94-4c4f-a0ea-577481974f45-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 16:25:04 crc kubenswrapper[4760]: I0121 16:25:04.586203 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6ad6c06-5c94-4c4f-a0ea-577481974f45-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 16:25:05 crc kubenswrapper[4760]: I0121 16:25:05.002424 4760 generic.go:334] "Generic (PLEG): container finished" podID="a6ad6c06-5c94-4c4f-a0ea-577481974f45" containerID="176103a3f5de724fe24636e85e452a0cf39d6935c30051e2fb3619a5c091706f" exitCode=0
Jan 21 16:25:05 crc kubenswrapper[4760]: I0121 16:25:05.002582 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fnq74" event={"ID":"a6ad6c06-5c94-4c4f-a0ea-577481974f45","Type":"ContainerDied","Data":"176103a3f5de724fe24636e85e452a0cf39d6935c30051e2fb3619a5c091706f"}
Jan 21 16:25:05 crc kubenswrapper[4760]: I0121 16:25:05.003992 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fnq74" event={"ID":"a6ad6c06-5c94-4c4f-a0ea-577481974f45","Type":"ContainerDied","Data":"9c834efed315b9a99a7958b031cc4168032b84408d58f7993aa130230fa38bde"}
Jan 21 16:25:05 crc kubenswrapper[4760]: I0121 16:25:05.002639 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fnq74"
Jan 21 16:25:05 crc kubenswrapper[4760]: I0121 16:25:05.004132 4760 scope.go:117] "RemoveContainer" containerID="176103a3f5de724fe24636e85e452a0cf39d6935c30051e2fb3619a5c091706f"
Jan 21 16:25:05 crc kubenswrapper[4760]: I0121 16:25:05.022239 4760 scope.go:117] "RemoveContainer" containerID="f77db6ff60571bbe7e2ef81e78e1a89dafdd8c2b20165ff785010c8fca2edbb3"
Jan 21 16:25:05 crc kubenswrapper[4760]: I0121 16:25:05.040791 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fnq74"]
Jan 21 16:25:05 crc kubenswrapper[4760]: I0121 16:25:05.049077 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fnq74"]
Jan 21 16:25:05 crc kubenswrapper[4760]: I0121 16:25:05.057571 4760 scope.go:117] "RemoveContainer" containerID="ffd4c3463050beae08b5b08712985252c15082ef001b2fa98058089411b69862"
Jan 21 16:25:05 crc kubenswrapper[4760]: I0121 16:25:05.146680 4760 scope.go:117] "RemoveContainer" containerID="176103a3f5de724fe24636e85e452a0cf39d6935c30051e2fb3619a5c091706f"
Jan 21 16:25:05 crc kubenswrapper[4760]: E0121 16:25:05.147032 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"176103a3f5de724fe24636e85e452a0cf39d6935c30051e2fb3619a5c091706f\": container with ID starting with 176103a3f5de724fe24636e85e452a0cf39d6935c30051e2fb3619a5c091706f not found: ID does not exist" containerID="176103a3f5de724fe24636e85e452a0cf39d6935c30051e2fb3619a5c091706f"
Jan 21 16:25:05 crc kubenswrapper[4760]: I0121 16:25:05.147065 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"176103a3f5de724fe24636e85e452a0cf39d6935c30051e2fb3619a5c091706f"} err="failed to get container status \"176103a3f5de724fe24636e85e452a0cf39d6935c30051e2fb3619a5c091706f\": rpc error: code = NotFound desc = could not find container \"176103a3f5de724fe24636e85e452a0cf39d6935c30051e2fb3619a5c091706f\": container with ID starting with 176103a3f5de724fe24636e85e452a0cf39d6935c30051e2fb3619a5c091706f not found: ID does not exist"
Jan 21 16:25:05 crc kubenswrapper[4760]: I0121 16:25:05.147087 4760 scope.go:117] "RemoveContainer" containerID="f77db6ff60571bbe7e2ef81e78e1a89dafdd8c2b20165ff785010c8fca2edbb3"
Jan 21 16:25:05 crc kubenswrapper[4760]: E0121 16:25:05.147269 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f77db6ff60571bbe7e2ef81e78e1a89dafdd8c2b20165ff785010c8fca2edbb3\": container with ID starting with f77db6ff60571bbe7e2ef81e78e1a89dafdd8c2b20165ff785010c8fca2edbb3 not found: ID does not exist" containerID="f77db6ff60571bbe7e2ef81e78e1a89dafdd8c2b20165ff785010c8fca2edbb3"
Jan 21 16:25:05 crc kubenswrapper[4760]: I0121 16:25:05.147293 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f77db6ff60571bbe7e2ef81e78e1a89dafdd8c2b20165ff785010c8fca2edbb3"} err="failed to get container status \"f77db6ff60571bbe7e2ef81e78e1a89dafdd8c2b20165ff785010c8fca2edbb3\": rpc error: code = NotFound desc = could not find container \"f77db6ff60571bbe7e2ef81e78e1a89dafdd8c2b20165ff785010c8fca2edbb3\": container with ID starting with f77db6ff60571bbe7e2ef81e78e1a89dafdd8c2b20165ff785010c8fca2edbb3 not found: ID does not exist"
Jan 21 16:25:05 crc kubenswrapper[4760]: I0121 16:25:05.147307 4760 scope.go:117] "RemoveContainer" containerID="ffd4c3463050beae08b5b08712985252c15082ef001b2fa98058089411b69862"
Jan 21 16:25:05 crc kubenswrapper[4760]: E0121 16:25:05.147705 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffd4c3463050beae08b5b08712985252c15082ef001b2fa98058089411b69862\": container with ID starting with ffd4c3463050beae08b5b08712985252c15082ef001b2fa98058089411b69862 not found: ID does not exist"
containerID="ffd4c3463050beae08b5b08712985252c15082ef001b2fa98058089411b69862" Jan 21 16:25:05 crc kubenswrapper[4760]: I0121 16:25:05.147726 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffd4c3463050beae08b5b08712985252c15082ef001b2fa98058089411b69862"} err="failed to get container status \"ffd4c3463050beae08b5b08712985252c15082ef001b2fa98058089411b69862\": rpc error: code = NotFound desc = could not find container \"ffd4c3463050beae08b5b08712985252c15082ef001b2fa98058089411b69862\": container with ID starting with ffd4c3463050beae08b5b08712985252c15082ef001b2fa98058089411b69862 not found: ID does not exist" Jan 21 16:25:05 crc kubenswrapper[4760]: I0121 16:25:05.635310 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6ad6c06-5c94-4c4f-a0ea-577481974f45" path="/var/lib/kubelet/pods/a6ad6c06-5c94-4c4f-a0ea-577481974f45/volumes" Jan 21 16:26:20 crc kubenswrapper[4760]: I0121 16:26:20.946056 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:26:20 crc kubenswrapper[4760]: I0121 16:26:20.946771 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:26:50 crc kubenswrapper[4760]: I0121 16:26:50.946026 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Jan 21 16:26:50 crc kubenswrapper[4760]: I0121 16:26:50.946495 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:27:20 crc kubenswrapper[4760]: I0121 16:27:20.946613 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:27:20 crc kubenswrapper[4760]: I0121 16:27:20.947164 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:27:20 crc kubenswrapper[4760]: I0121 16:27:20.947213 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" Jan 21 16:27:20 crc kubenswrapper[4760]: I0121 16:27:20.948090 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ff059cd7ee64fc1bb7edf6f6979bcf4dea70be92be03f71b7ac9ec4f0c0cddc3"} pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 16:27:20 crc kubenswrapper[4760]: I0121 16:27:20.948158 4760 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" containerID="cri-o://ff059cd7ee64fc1bb7edf6f6979bcf4dea70be92be03f71b7ac9ec4f0c0cddc3" gracePeriod=600 Jan 21 16:27:21 crc kubenswrapper[4760]: E0121 16:27:21.073212 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:27:21 crc kubenswrapper[4760]: I0121 16:27:21.158897 4760 generic.go:334] "Generic (PLEG): container finished" podID="5dd365e7-570c-4130-a299-30e376624ce2" containerID="ff059cd7ee64fc1bb7edf6f6979bcf4dea70be92be03f71b7ac9ec4f0c0cddc3" exitCode=0 Jan 21 16:27:21 crc kubenswrapper[4760]: I0121 16:27:21.158954 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerDied","Data":"ff059cd7ee64fc1bb7edf6f6979bcf4dea70be92be03f71b7ac9ec4f0c0cddc3"} Jan 21 16:27:21 crc kubenswrapper[4760]: I0121 16:27:21.159020 4760 scope.go:117] "RemoveContainer" containerID="c2840f7fa52c7be05357223bdfaf116cc5b43319886b3e09ab5d04195688d268" Jan 21 16:27:21 crc kubenswrapper[4760]: I0121 16:27:21.159844 4760 scope.go:117] "RemoveContainer" containerID="ff059cd7ee64fc1bb7edf6f6979bcf4dea70be92be03f71b7ac9ec4f0c0cddc3" Jan 21 16:27:21 crc kubenswrapper[4760]: E0121 16:27:21.160096 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:27:33 crc kubenswrapper[4760]: I0121 16:27:33.622718 4760 scope.go:117] "RemoveContainer" containerID="ff059cd7ee64fc1bb7edf6f6979bcf4dea70be92be03f71b7ac9ec4f0c0cddc3" Jan 21 16:27:33 crc kubenswrapper[4760]: E0121 16:27:33.624603 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:27:46 crc kubenswrapper[4760]: I0121 16:27:46.622906 4760 scope.go:117] "RemoveContainer" containerID="ff059cd7ee64fc1bb7edf6f6979bcf4dea70be92be03f71b7ac9ec4f0c0cddc3" Jan 21 16:27:46 crc kubenswrapper[4760]: E0121 16:27:46.623740 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:27:58 crc kubenswrapper[4760]: I0121 16:27:58.622853 4760 scope.go:117] "RemoveContainer" containerID="ff059cd7ee64fc1bb7edf6f6979bcf4dea70be92be03f71b7ac9ec4f0c0cddc3" Jan 21 16:27:58 crc kubenswrapper[4760]: E0121 16:27:58.623557 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:28:09 crc kubenswrapper[4760]: I0121 16:28:09.629219 4760 scope.go:117] "RemoveContainer" containerID="ff059cd7ee64fc1bb7edf6f6979bcf4dea70be92be03f71b7ac9ec4f0c0cddc3" Jan 21 16:28:09 crc kubenswrapper[4760]: E0121 16:28:09.630097 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:28:23 crc kubenswrapper[4760]: I0121 16:28:23.623283 4760 scope.go:117] "RemoveContainer" containerID="ff059cd7ee64fc1bb7edf6f6979bcf4dea70be92be03f71b7ac9ec4f0c0cddc3" Jan 21 16:28:23 crc kubenswrapper[4760]: E0121 16:28:23.624422 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:28:38 crc kubenswrapper[4760]: I0121 16:28:38.623720 4760 scope.go:117] "RemoveContainer" containerID="ff059cd7ee64fc1bb7edf6f6979bcf4dea70be92be03f71b7ac9ec4f0c0cddc3" Jan 21 16:28:38 crc kubenswrapper[4760]: E0121 16:28:38.624590 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:28:42 crc kubenswrapper[4760]: I0121 16:28:42.787262 4760 generic.go:334] "Generic (PLEG): container finished" podID="60b03623-4db5-445f-89b4-61f39ac04dc2" containerID="0a10d6eba529254935110a96670ecf47a8c0c66342fb33262ccde3794d781379" exitCode=0 Jan 21 16:28:42 crc kubenswrapper[4760]: I0121 16:28:42.787359 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6f958" event={"ID":"60b03623-4db5-445f-89b4-61f39ac04dc2","Type":"ContainerDied","Data":"0a10d6eba529254935110a96670ecf47a8c0c66342fb33262ccde3794d781379"} Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.266012 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6f958" Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.371501 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/60b03623-4db5-445f-89b4-61f39ac04dc2-libvirt-secret-0\") pod \"60b03623-4db5-445f-89b4-61f39ac04dc2\" (UID: \"60b03623-4db5-445f-89b4-61f39ac04dc2\") " Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.371665 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/60b03623-4db5-445f-89b4-61f39ac04dc2-inventory\") pod \"60b03623-4db5-445f-89b4-61f39ac04dc2\" (UID: \"60b03623-4db5-445f-89b4-61f39ac04dc2\") " Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.371713 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qq6n8\" (UniqueName: 
\"kubernetes.io/projected/60b03623-4db5-445f-89b4-61f39ac04dc2-kube-api-access-qq6n8\") pod \"60b03623-4db5-445f-89b4-61f39ac04dc2\" (UID: \"60b03623-4db5-445f-89b4-61f39ac04dc2\") " Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.371767 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/60b03623-4db5-445f-89b4-61f39ac04dc2-ssh-key-openstack-edpm-ipam\") pod \"60b03623-4db5-445f-89b4-61f39ac04dc2\" (UID: \"60b03623-4db5-445f-89b4-61f39ac04dc2\") " Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.371856 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60b03623-4db5-445f-89b4-61f39ac04dc2-libvirt-combined-ca-bundle\") pod \"60b03623-4db5-445f-89b4-61f39ac04dc2\" (UID: \"60b03623-4db5-445f-89b4-61f39ac04dc2\") " Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.377528 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60b03623-4db5-445f-89b4-61f39ac04dc2-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "60b03623-4db5-445f-89b4-61f39ac04dc2" (UID: "60b03623-4db5-445f-89b4-61f39ac04dc2"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.380078 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60b03623-4db5-445f-89b4-61f39ac04dc2-kube-api-access-qq6n8" (OuterVolumeSpecName: "kube-api-access-qq6n8") pod "60b03623-4db5-445f-89b4-61f39ac04dc2" (UID: "60b03623-4db5-445f-89b4-61f39ac04dc2"). InnerVolumeSpecName "kube-api-access-qq6n8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.398789 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60b03623-4db5-445f-89b4-61f39ac04dc2-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "60b03623-4db5-445f-89b4-61f39ac04dc2" (UID: "60b03623-4db5-445f-89b4-61f39ac04dc2"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.401528 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60b03623-4db5-445f-89b4-61f39ac04dc2-inventory" (OuterVolumeSpecName: "inventory") pod "60b03623-4db5-445f-89b4-61f39ac04dc2" (UID: "60b03623-4db5-445f-89b4-61f39ac04dc2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.409595 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60b03623-4db5-445f-89b4-61f39ac04dc2-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "60b03623-4db5-445f-89b4-61f39ac04dc2" (UID: "60b03623-4db5-445f-89b4-61f39ac04dc2"). InnerVolumeSpecName "libvirt-secret-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.474574 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/60b03623-4db5-445f-89b4-61f39ac04dc2-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.474620 4760 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60b03623-4db5-445f-89b4-61f39ac04dc2-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.474634 4760 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/60b03623-4db5-445f-89b4-61f39ac04dc2-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.474678 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/60b03623-4db5-445f-89b4-61f39ac04dc2-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.474691 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qq6n8\" (UniqueName: \"kubernetes.io/projected/60b03623-4db5-445f-89b4-61f39ac04dc2-kube-api-access-qq6n8\") on node \"crc\" DevicePath \"\"" Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.805802 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6f958" event={"ID":"60b03623-4db5-445f-89b4-61f39ac04dc2","Type":"ContainerDied","Data":"5babf90ccad8bd8ad09c515a3d684264a9e6c9bd041a327a64e8b366f61b75a9"} Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.805862 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5babf90ccad8bd8ad09c515a3d684264a9e6c9bd041a327a64e8b366f61b75a9" Jan 21 
16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.806355 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6f958" Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.896480 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb"] Jan 21 16:28:44 crc kubenswrapper[4760]: E0121 16:28:44.896950 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6ad6c06-5c94-4c4f-a0ea-577481974f45" containerName="extract-utilities" Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.896971 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6ad6c06-5c94-4c4f-a0ea-577481974f45" containerName="extract-utilities" Jan 21 16:28:44 crc kubenswrapper[4760]: E0121 16:28:44.896989 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60b03623-4db5-445f-89b4-61f39ac04dc2" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.896996 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="60b03623-4db5-445f-89b4-61f39ac04dc2" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 21 16:28:44 crc kubenswrapper[4760]: E0121 16:28:44.897019 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6ad6c06-5c94-4c4f-a0ea-577481974f45" containerName="registry-server" Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.897025 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6ad6c06-5c94-4c4f-a0ea-577481974f45" containerName="registry-server" Jan 21 16:28:44 crc kubenswrapper[4760]: E0121 16:28:44.897041 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6ad6c06-5c94-4c4f-a0ea-577481974f45" containerName="extract-content" Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.897047 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6ad6c06-5c94-4c4f-a0ea-577481974f45" 
containerName="extract-content" Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.897216 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6ad6c06-5c94-4c4f-a0ea-577481974f45" containerName="registry-server" Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.897233 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="60b03623-4db5-445f-89b4-61f39ac04dc2" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.897855 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb" Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.902883 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.902958 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.903133 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.903279 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.903424 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-brqp8" Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.903527 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.903680 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.932129 4760 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb"] Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.985550 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tqhjb\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb" Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.985616 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tqhjb\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb" Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.985650 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tqhjb\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb" Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.985791 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tqhjb\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb" Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.985826 4760 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5z84\" (UniqueName: \"kubernetes.io/projected/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-kube-api-access-q5z84\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tqhjb\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb" Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.986102 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tqhjb\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb" Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.986203 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tqhjb\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb" Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.986236 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tqhjb\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb" Jan 21 16:28:44 crc kubenswrapper[4760]: I0121 16:28:44.986285 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: 
\"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tqhjb\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb" Jan 21 16:28:45 crc kubenswrapper[4760]: I0121 16:28:45.088569 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tqhjb\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb" Jan 21 16:28:45 crc kubenswrapper[4760]: I0121 16:28:45.088643 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tqhjb\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb" Jan 21 16:28:45 crc kubenswrapper[4760]: I0121 16:28:45.088674 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tqhjb\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb" Jan 21 16:28:45 crc kubenswrapper[4760]: I0121 16:28:45.088797 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tqhjb\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb"
Jan 21 16:28:45 crc kubenswrapper[4760]: I0121 16:28:45.088833 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5z84\" (UniqueName: \"kubernetes.io/projected/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-kube-api-access-q5z84\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tqhjb\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb"
Jan 21 16:28:45 crc kubenswrapper[4760]: I0121 16:28:45.088904 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tqhjb\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb"
Jan 21 16:28:45 crc kubenswrapper[4760]: I0121 16:28:45.088937 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tqhjb\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb"
Jan 21 16:28:45 crc kubenswrapper[4760]: I0121 16:28:45.088964 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tqhjb\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb"
Jan 21 16:28:45 crc kubenswrapper[4760]: I0121 16:28:45.089003 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tqhjb\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb"
Jan 21 16:28:45 crc kubenswrapper[4760]: I0121 16:28:45.101111 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tqhjb\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb"
Jan 21 16:28:45 crc kubenswrapper[4760]: I0121 16:28:45.108201 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tqhjb\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb"
Jan 21 16:28:45 crc kubenswrapper[4760]: I0121 16:28:45.116188 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tqhjb\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb"
Jan 21 16:28:45 crc kubenswrapper[4760]: I0121 16:28:45.119229 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tqhjb\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb"
Jan 21 16:28:45 crc kubenswrapper[4760]: I0121 16:28:45.121064 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tqhjb\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb"
Jan 21 16:28:45 crc kubenswrapper[4760]: I0121 16:28:45.123184 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tqhjb\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb"
Jan 21 16:28:45 crc kubenswrapper[4760]: I0121 16:28:45.131529 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tqhjb\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb"
Jan 21 16:28:45 crc kubenswrapper[4760]: I0121 16:28:45.134899 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tqhjb\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb"
Jan 21 16:28:45 crc kubenswrapper[4760]: I0121 16:28:45.139159 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5z84\" (UniqueName: \"kubernetes.io/projected/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-kube-api-access-q5z84\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tqhjb\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb"
Jan 21 16:28:45 crc kubenswrapper[4760]: I0121 16:28:45.223860 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb"
Jan 21 16:28:45 crc kubenswrapper[4760]: I0121 16:28:45.522991 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb"]
Jan 21 16:28:45 crc kubenswrapper[4760]: I0121 16:28:45.814959 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb" event={"ID":"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1","Type":"ContainerStarted","Data":"314e0b98df798b35b060fc6a15fb89764580b694bd378f2bbf5fc245fa6b780c"}
Jan 21 16:28:47 crc kubenswrapper[4760]: I0121 16:28:47.852166 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb" event={"ID":"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1","Type":"ContainerStarted","Data":"688236563b699e5b54a348115ab73a0c1ea5297bdbf44f8b861bebff1b72cb0c"}
Jan 21 16:28:47 crc kubenswrapper[4760]: I0121 16:28:47.880039 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb" podStartSLOduration=2.493058671 podStartE2EDuration="3.880021278s" podCreationTimestamp="2026-01-21 16:28:44 +0000 UTC" firstStartedPulling="2026-01-21 16:28:45.538162075 +0000 UTC m=+2496.205931643" lastFinishedPulling="2026-01-21 16:28:46.925124672 +0000 UTC m=+2497.592894250" observedRunningTime="2026-01-21 16:28:47.87767295 +0000 UTC m=+2498.545442528" watchObservedRunningTime="2026-01-21 16:28:47.880021278 +0000 UTC m=+2498.547790856"
Jan 21 16:28:50 crc kubenswrapper[4760]: I0121 16:28:50.622949 4760 scope.go:117] "RemoveContainer" containerID="ff059cd7ee64fc1bb7edf6f6979bcf4dea70be92be03f71b7ac9ec4f0c0cddc3"
Jan 21 16:28:50 crc kubenswrapper[4760]: E0121 16:28:50.623502 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:29:02 crc kubenswrapper[4760]: I0121 16:29:02.622474 4760 scope.go:117] "RemoveContainer" containerID="ff059cd7ee64fc1bb7edf6f6979bcf4dea70be92be03f71b7ac9ec4f0c0cddc3"
Jan 21 16:29:02 crc kubenswrapper[4760]: E0121 16:29:02.623218 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:29:17 crc kubenswrapper[4760]: I0121 16:29:17.629281 4760 scope.go:117] "RemoveContainer" containerID="ff059cd7ee64fc1bb7edf6f6979bcf4dea70be92be03f71b7ac9ec4f0c0cddc3"
Jan 21 16:29:17 crc kubenswrapper[4760]: E0121 16:29:17.629979 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:29:31 crc kubenswrapper[4760]: I0121 16:29:31.622398 4760 scope.go:117] "RemoveContainer" containerID="ff059cd7ee64fc1bb7edf6f6979bcf4dea70be92be03f71b7ac9ec4f0c0cddc3"
Jan 21 16:29:31 crc kubenswrapper[4760]: E0121 16:29:31.623141 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:29:46 crc kubenswrapper[4760]: I0121 16:29:46.622869 4760 scope.go:117] "RemoveContainer" containerID="ff059cd7ee64fc1bb7edf6f6979bcf4dea70be92be03f71b7ac9ec4f0c0cddc3"
Jan 21 16:29:46 crc kubenswrapper[4760]: E0121 16:29:46.623674 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:29:58 crc kubenswrapper[4760]: I0121 16:29:58.622962 4760 scope.go:117] "RemoveContainer" containerID="ff059cd7ee64fc1bb7edf6f6979bcf4dea70be92be03f71b7ac9ec4f0c0cddc3"
Jan 21 16:29:58 crc kubenswrapper[4760]: E0121 16:29:58.623634 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:30:00 crc kubenswrapper[4760]: I0121 16:30:00.143191 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483550-g7kk5"]
Jan 21 16:30:00 crc kubenswrapper[4760]: I0121 16:30:00.145661 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-g7kk5"
Jan 21 16:30:00 crc kubenswrapper[4760]: I0121 16:30:00.148973 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 21 16:30:00 crc kubenswrapper[4760]: I0121 16:30:00.149353 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 21 16:30:00 crc kubenswrapper[4760]: I0121 16:30:00.160318 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483550-g7kk5"]
Jan 21 16:30:00 crc kubenswrapper[4760]: I0121 16:30:00.240852 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2bff8864-1bb4-44c2-8b7b-869692e76f2c-config-volume\") pod \"collect-profiles-29483550-g7kk5\" (UID: \"2bff8864-1bb4-44c2-8b7b-869692e76f2c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-g7kk5"
Jan 21 16:30:00 crc kubenswrapper[4760]: I0121 16:30:00.240933 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c84nj\" (UniqueName: \"kubernetes.io/projected/2bff8864-1bb4-44c2-8b7b-869692e76f2c-kube-api-access-c84nj\") pod \"collect-profiles-29483550-g7kk5\" (UID: \"2bff8864-1bb4-44c2-8b7b-869692e76f2c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-g7kk5"
Jan 21 16:30:00 crc kubenswrapper[4760]: I0121 16:30:00.241006 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2bff8864-1bb4-44c2-8b7b-869692e76f2c-secret-volume\") pod \"collect-profiles-29483550-g7kk5\" (UID: \"2bff8864-1bb4-44c2-8b7b-869692e76f2c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-g7kk5"
Jan 21 16:30:00 crc kubenswrapper[4760]: I0121 16:30:00.342501 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2bff8864-1bb4-44c2-8b7b-869692e76f2c-config-volume\") pod \"collect-profiles-29483550-g7kk5\" (UID: \"2bff8864-1bb4-44c2-8b7b-869692e76f2c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-g7kk5"
Jan 21 16:30:00 crc kubenswrapper[4760]: I0121 16:30:00.342573 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c84nj\" (UniqueName: \"kubernetes.io/projected/2bff8864-1bb4-44c2-8b7b-869692e76f2c-kube-api-access-c84nj\") pod \"collect-profiles-29483550-g7kk5\" (UID: \"2bff8864-1bb4-44c2-8b7b-869692e76f2c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-g7kk5"
Jan 21 16:30:00 crc kubenswrapper[4760]: I0121 16:30:00.342647 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2bff8864-1bb4-44c2-8b7b-869692e76f2c-secret-volume\") pod \"collect-profiles-29483550-g7kk5\" (UID: \"2bff8864-1bb4-44c2-8b7b-869692e76f2c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-g7kk5"
Jan 21 16:30:00 crc kubenswrapper[4760]: I0121 16:30:00.343637 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2bff8864-1bb4-44c2-8b7b-869692e76f2c-config-volume\") pod \"collect-profiles-29483550-g7kk5\" (UID: \"2bff8864-1bb4-44c2-8b7b-869692e76f2c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-g7kk5"
Jan 21 16:30:00 crc kubenswrapper[4760]: I0121 16:30:00.358412 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2bff8864-1bb4-44c2-8b7b-869692e76f2c-secret-volume\") pod \"collect-profiles-29483550-g7kk5\" (UID: \"2bff8864-1bb4-44c2-8b7b-869692e76f2c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-g7kk5"
Jan 21 16:30:00 crc kubenswrapper[4760]: I0121 16:30:00.359062 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c84nj\" (UniqueName: \"kubernetes.io/projected/2bff8864-1bb4-44c2-8b7b-869692e76f2c-kube-api-access-c84nj\") pod \"collect-profiles-29483550-g7kk5\" (UID: \"2bff8864-1bb4-44c2-8b7b-869692e76f2c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-g7kk5"
Jan 21 16:30:00 crc kubenswrapper[4760]: I0121 16:30:00.470531 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-g7kk5"
Jan 21 16:30:00 crc kubenswrapper[4760]: I0121 16:30:00.922897 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483550-g7kk5"]
Jan 21 16:30:01 crc kubenswrapper[4760]: I0121 16:30:01.023356 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-g7kk5" event={"ID":"2bff8864-1bb4-44c2-8b7b-869692e76f2c","Type":"ContainerStarted","Data":"4c4e9b0676c968c30c3b4e1d45a3b27c93719c9f4ecfd10b034e2fb38796080e"}
Jan 21 16:30:02 crc kubenswrapper[4760]: I0121 16:30:02.045702 4760 generic.go:334] "Generic (PLEG): container finished" podID="2bff8864-1bb4-44c2-8b7b-869692e76f2c" containerID="ee53324c254b83333dfb9082ce11be652a506fd8e44cca6ee9d41630e19d9322" exitCode=0
Jan 21 16:30:02 crc kubenswrapper[4760]: I0121 16:30:02.045997 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-g7kk5" event={"ID":"2bff8864-1bb4-44c2-8b7b-869692e76f2c","Type":"ContainerDied","Data":"ee53324c254b83333dfb9082ce11be652a506fd8e44cca6ee9d41630e19d9322"}
Jan 21 16:30:03 crc kubenswrapper[4760]: I0121 16:30:03.387164 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-g7kk5"
Jan 21 16:30:03 crc kubenswrapper[4760]: I0121 16:30:03.499150 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2bff8864-1bb4-44c2-8b7b-869692e76f2c-secret-volume\") pod \"2bff8864-1bb4-44c2-8b7b-869692e76f2c\" (UID: \"2bff8864-1bb4-44c2-8b7b-869692e76f2c\") "
Jan 21 16:30:03 crc kubenswrapper[4760]: I0121 16:30:03.499232 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c84nj\" (UniqueName: \"kubernetes.io/projected/2bff8864-1bb4-44c2-8b7b-869692e76f2c-kube-api-access-c84nj\") pod \"2bff8864-1bb4-44c2-8b7b-869692e76f2c\" (UID: \"2bff8864-1bb4-44c2-8b7b-869692e76f2c\") "
Jan 21 16:30:03 crc kubenswrapper[4760]: I0121 16:30:03.499607 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2bff8864-1bb4-44c2-8b7b-869692e76f2c-config-volume\") pod \"2bff8864-1bb4-44c2-8b7b-869692e76f2c\" (UID: \"2bff8864-1bb4-44c2-8b7b-869692e76f2c\") "
Jan 21 16:30:03 crc kubenswrapper[4760]: I0121 16:30:03.500735 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2bff8864-1bb4-44c2-8b7b-869692e76f2c-config-volume" (OuterVolumeSpecName: "config-volume") pod "2bff8864-1bb4-44c2-8b7b-869692e76f2c" (UID: "2bff8864-1bb4-44c2-8b7b-869692e76f2c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 16:30:03 crc kubenswrapper[4760]: I0121 16:30:03.505286 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bff8864-1bb4-44c2-8b7b-869692e76f2c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2bff8864-1bb4-44c2-8b7b-869692e76f2c" (UID: "2bff8864-1bb4-44c2-8b7b-869692e76f2c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:30:03 crc kubenswrapper[4760]: I0121 16:30:03.505739 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bff8864-1bb4-44c2-8b7b-869692e76f2c-kube-api-access-c84nj" (OuterVolumeSpecName: "kube-api-access-c84nj") pod "2bff8864-1bb4-44c2-8b7b-869692e76f2c" (UID: "2bff8864-1bb4-44c2-8b7b-869692e76f2c"). InnerVolumeSpecName "kube-api-access-c84nj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:30:03 crc kubenswrapper[4760]: I0121 16:30:03.602106 4760 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2bff8864-1bb4-44c2-8b7b-869692e76f2c-config-volume\") on node \"crc\" DevicePath \"\""
Jan 21 16:30:03 crc kubenswrapper[4760]: I0121 16:30:03.602165 4760 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2bff8864-1bb4-44c2-8b7b-869692e76f2c-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 21 16:30:03 crc kubenswrapper[4760]: I0121 16:30:03.602180 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c84nj\" (UniqueName: \"kubernetes.io/projected/2bff8864-1bb4-44c2-8b7b-869692e76f2c-kube-api-access-c84nj\") on node \"crc\" DevicePath \"\""
Jan 21 16:30:03 crc kubenswrapper[4760]: E0121 16:30:03.699732 4760 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2bff8864_1bb4_44c2_8b7b_869692e76f2c.slice/crio-4c4e9b0676c968c30c3b4e1d45a3b27c93719c9f4ecfd10b034e2fb38796080e\": RecentStats: unable to find data in memory cache]"
Jan 21 16:30:04 crc kubenswrapper[4760]: I0121 16:30:04.071791 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-g7kk5" event={"ID":"2bff8864-1bb4-44c2-8b7b-869692e76f2c","Type":"ContainerDied","Data":"4c4e9b0676c968c30c3b4e1d45a3b27c93719c9f4ecfd10b034e2fb38796080e"}
Jan 21 16:30:04 crc kubenswrapper[4760]: I0121 16:30:04.071844 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c4e9b0676c968c30c3b4e1d45a3b27c93719c9f4ecfd10b034e2fb38796080e"
Jan 21 16:30:04 crc kubenswrapper[4760]: I0121 16:30:04.071917 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483550-g7kk5"
Jan 21 16:30:04 crc kubenswrapper[4760]: I0121 16:30:04.461987 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483505-mt5nl"]
Jan 21 16:30:04 crc kubenswrapper[4760]: I0121 16:30:04.469528 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483505-mt5nl"]
Jan 21 16:30:05 crc kubenswrapper[4760]: I0121 16:30:05.634411 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a24dcb12-1228-4acf-bea2-864a7c159e6f" path="/var/lib/kubelet/pods/a24dcb12-1228-4acf-bea2-864a7c159e6f/volumes"
Jan 21 16:30:13 crc kubenswrapper[4760]: I0121 16:30:13.623577 4760 scope.go:117] "RemoveContainer" containerID="ff059cd7ee64fc1bb7edf6f6979bcf4dea70be92be03f71b7ac9ec4f0c0cddc3"
Jan 21 16:30:13 crc kubenswrapper[4760]: E0121 16:30:13.624420 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:30:26 crc kubenswrapper[4760]: I0121 16:30:26.622966 4760 scope.go:117] "RemoveContainer" containerID="ff059cd7ee64fc1bb7edf6f6979bcf4dea70be92be03f71b7ac9ec4f0c0cddc3"
Jan 21 16:30:26 crc kubenswrapper[4760]: E0121 16:30:26.623800 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:30:41 crc kubenswrapper[4760]: I0121 16:30:41.622380 4760 scope.go:117] "RemoveContainer" containerID="ff059cd7ee64fc1bb7edf6f6979bcf4dea70be92be03f71b7ac9ec4f0c0cddc3"
Jan 21 16:30:41 crc kubenswrapper[4760]: E0121 16:30:41.623188 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:30:42 crc kubenswrapper[4760]: I0121 16:30:42.773872 4760 scope.go:117] "RemoveContainer" containerID="d05d48c2e85f535cdd9d87b330fd379ffaeb0ab7b963c572924272cc4541df70"
Jan 21 16:30:55 crc kubenswrapper[4760]: I0121 16:30:55.622991 4760 scope.go:117] "RemoveContainer" containerID="ff059cd7ee64fc1bb7edf6f6979bcf4dea70be92be03f71b7ac9ec4f0c0cddc3"
Jan 21 16:30:55 crc kubenswrapper[4760]: E0121 16:30:55.623994 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:31:08 crc kubenswrapper[4760]: I0121 16:31:08.623044 4760 scope.go:117] "RemoveContainer" containerID="ff059cd7ee64fc1bb7edf6f6979bcf4dea70be92be03f71b7ac9ec4f0c0cddc3"
Jan 21 16:31:08 crc kubenswrapper[4760]: E0121 16:31:08.624104 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:31:08 crc kubenswrapper[4760]: I0121 16:31:08.637666 4760 generic.go:334] "Generic (PLEG): container finished" podID="5a4de6cd-9a26-49b4-a3f7-eb743b8830b1" containerID="688236563b699e5b54a348115ab73a0c1ea5297bdbf44f8b861bebff1b72cb0c" exitCode=0
Jan 21 16:31:08 crc kubenswrapper[4760]: I0121 16:31:08.637723 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb" event={"ID":"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1","Type":"ContainerDied","Data":"688236563b699e5b54a348115ab73a0c1ea5297bdbf44f8b861bebff1b72cb0c"}
Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.084417 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb"
Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.109293 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-ssh-key-openstack-edpm-ipam\") pod \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") "
Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.109360 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-cell1-compute-config-0\") pod \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") "
Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.109391 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-extra-config-0\") pod \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") "
Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.109409 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-cell1-compute-config-1\") pod \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") "
Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.109429 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-migration-ssh-key-1\") pod \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") "
Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.109452 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5z84\" (UniqueName: \"kubernetes.io/projected/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-kube-api-access-q5z84\") pod \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") "
Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.109478 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-migration-ssh-key-0\") pod \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") "
Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.109505 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-inventory\") pod \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") "
Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.109562 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-combined-ca-bundle\") pod \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\" (UID: \"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1\") "
Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.118683 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-kube-api-access-q5z84" (OuterVolumeSpecName: "kube-api-access-q5z84") pod "5a4de6cd-9a26-49b4-a3f7-eb743b8830b1" (UID: "5a4de6cd-9a26-49b4-a3f7-eb743b8830b1"). InnerVolumeSpecName "kube-api-access-q5z84". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.133140 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "5a4de6cd-9a26-49b4-a3f7-eb743b8830b1" (UID: "5a4de6cd-9a26-49b4-a3f7-eb743b8830b1"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.138464 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5a4de6cd-9a26-49b4-a3f7-eb743b8830b1" (UID: "5a4de6cd-9a26-49b4-a3f7-eb743b8830b1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.142973 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "5a4de6cd-9a26-49b4-a3f7-eb743b8830b1" (UID: "5a4de6cd-9a26-49b4-a3f7-eb743b8830b1"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.147680 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "5a4de6cd-9a26-49b4-a3f7-eb743b8830b1" (UID: "5a4de6cd-9a26-49b4-a3f7-eb743b8830b1"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.149550 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-inventory" (OuterVolumeSpecName: "inventory") pod "5a4de6cd-9a26-49b4-a3f7-eb743b8830b1" (UID: "5a4de6cd-9a26-49b4-a3f7-eb743b8830b1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.151631 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "5a4de6cd-9a26-49b4-a3f7-eb743b8830b1" (UID: "5a4de6cd-9a26-49b4-a3f7-eb743b8830b1"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.153280 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "5a4de6cd-9a26-49b4-a3f7-eb743b8830b1" (UID: "5a4de6cd-9a26-49b4-a3f7-eb743b8830b1"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.177066 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "5a4de6cd-9a26-49b4-a3f7-eb743b8830b1" (UID: "5a4de6cd-9a26-49b4-a3f7-eb743b8830b1"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.211467 4760 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\""
Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.211714 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-inventory\") on node \"crc\" DevicePath \"\""
Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.211792 4760 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.211885 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.211964 4760 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\""
Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.212037 4760 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-extra-config-0\") on node \"crc\" DevicePath \"\""
Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.212113 4760 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\""
Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.212191 4760 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\""
Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.212279 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5z84\" (UniqueName: \"kubernetes.io/projected/5a4de6cd-9a26-49b4-a3f7-eb743b8830b1-kube-api-access-q5z84\") on node \"crc\" DevicePath \"\""
Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.674330 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb" event={"ID":"5a4de6cd-9a26-49b4-a3f7-eb743b8830b1","Type":"ContainerDied","Data":"314e0b98df798b35b060fc6a15fb89764580b694bd378f2bbf5fc245fa6b780c"}
Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.674582 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="314e0b98df798b35b060fc6a15fb89764580b694bd378f2bbf5fc245fa6b780c"
Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.674426 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tqhjb"
Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.766091 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hblbg"]
Jan 21 16:31:10 crc kubenswrapper[4760]: E0121 16:31:10.766493 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a4de6cd-9a26-49b4-a3f7-eb743b8830b1" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.766513 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a4de6cd-9a26-49b4-a3f7-eb743b8830b1" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Jan 21 16:31:10 crc kubenswrapper[4760]: E0121 16:31:10.766553 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bff8864-1bb4-44c2-8b7b-869692e76f2c" containerName="collect-profiles"
Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.766560 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bff8864-1bb4-44c2-8b7b-869692e76f2c" containerName="collect-profiles"
Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.766719 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a4de6cd-9a26-49b4-a3f7-eb743b8830b1" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.766744 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bff8864-1bb4-44c2-8b7b-869692e76f2c" containerName="collect-profiles"
Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.767364 4760 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hblbg" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.771922 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.771946 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.772555 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-brqp8" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.772758 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.773120 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.789617 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hblbg"] Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.825604 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hblbg\" (UID: \"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hblbg" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.825665 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hblbg\" (UID: 
\"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hblbg" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.825710 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hblbg\" (UID: \"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hblbg" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.825748 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hblbg\" (UID: \"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hblbg" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.825823 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvvhh\" (UniqueName: \"kubernetes.io/projected/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-kube-api-access-tvvhh\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hblbg\" (UID: \"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hblbg" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.825849 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hblbg\" (UID: \"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hblbg" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.825955 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hblbg\" (UID: \"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hblbg" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.927560 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hblbg\" (UID: \"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hblbg" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.927621 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hblbg\" (UID: \"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hblbg" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.927663 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvvhh\" (UniqueName: \"kubernetes.io/projected/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-kube-api-access-tvvhh\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hblbg\" (UID: \"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hblbg" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.927688 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hblbg\" (UID: \"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hblbg" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.927708 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hblbg\" (UID: \"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hblbg" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.927797 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hblbg\" (UID: \"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hblbg" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.927832 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hblbg\" (UID: \"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hblbg" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.932460 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: 
\"kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hblbg\" (UID: \"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hblbg" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.932663 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hblbg\" (UID: \"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hblbg" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.932718 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hblbg\" (UID: \"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hblbg" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.932740 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hblbg\" (UID: \"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hblbg" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.933148 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hblbg\" (UID: \"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hblbg" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.933521 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hblbg\" (UID: \"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hblbg" Jan 21 16:31:10 crc kubenswrapper[4760]: I0121 16:31:10.956363 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvvhh\" (UniqueName: \"kubernetes.io/projected/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-kube-api-access-tvvhh\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hblbg\" (UID: \"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hblbg" Jan 21 16:31:11 crc kubenswrapper[4760]: I0121 16:31:11.094666 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hblbg" Jan 21 16:31:11 crc kubenswrapper[4760]: I0121 16:31:11.591996 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hblbg"] Jan 21 16:31:11 crc kubenswrapper[4760]: I0121 16:31:11.596815 4760 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 16:31:11 crc kubenswrapper[4760]: I0121 16:31:11.692245 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hblbg" event={"ID":"bb09237a-f1eb-4d14-894f-ac460ce3b7c3","Type":"ContainerStarted","Data":"31795f99a764c9dcb18287799b3c1b0607a3f1c0281ec38f96872354a30d7bb2"} Jan 21 16:31:12 crc kubenswrapper[4760]: I0121 16:31:12.701583 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hblbg" event={"ID":"bb09237a-f1eb-4d14-894f-ac460ce3b7c3","Type":"ContainerStarted","Data":"60be004e0cbafa4e24c9b9e44bfc2f4bcf3ce08bafe0d21a9fdab5a69d5d10fb"} Jan 21 16:31:12 crc kubenswrapper[4760]: I0121 16:31:12.720183 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hblbg" podStartSLOduration=2.305734 podStartE2EDuration="2.720164111s" podCreationTimestamp="2026-01-21 16:31:10 +0000 UTC" firstStartedPulling="2026-01-21 16:31:11.59661401 +0000 UTC m=+2642.264383608" lastFinishedPulling="2026-01-21 16:31:12.011044141 +0000 UTC m=+2642.678813719" observedRunningTime="2026-01-21 16:31:12.718631073 +0000 UTC m=+2643.386400661" watchObservedRunningTime="2026-01-21 16:31:12.720164111 +0000 UTC m=+2643.387933689" Jan 21 16:31:20 crc kubenswrapper[4760]: I0121 16:31:20.622935 4760 scope.go:117] "RemoveContainer" containerID="ff059cd7ee64fc1bb7edf6f6979bcf4dea70be92be03f71b7ac9ec4f0c0cddc3" Jan 21 16:31:20 crc kubenswrapper[4760]: E0121 
16:31:20.623895 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:31:35 crc kubenswrapper[4760]: I0121 16:31:35.622708 4760 scope.go:117] "RemoveContainer" containerID="ff059cd7ee64fc1bb7edf6f6979bcf4dea70be92be03f71b7ac9ec4f0c0cddc3" Jan 21 16:31:35 crc kubenswrapper[4760]: E0121 16:31:35.623535 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:31:48 crc kubenswrapper[4760]: I0121 16:31:48.622624 4760 scope.go:117] "RemoveContainer" containerID="ff059cd7ee64fc1bb7edf6f6979bcf4dea70be92be03f71b7ac9ec4f0c0cddc3" Jan 21 16:31:48 crc kubenswrapper[4760]: E0121 16:31:48.623473 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:32:02 crc kubenswrapper[4760]: I0121 16:32:02.622115 4760 scope.go:117] "RemoveContainer" containerID="ff059cd7ee64fc1bb7edf6f6979bcf4dea70be92be03f71b7ac9ec4f0c0cddc3" Jan 21 16:32:02 crc 
kubenswrapper[4760]: E0121 16:32:02.624166 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:32:15 crc kubenswrapper[4760]: I0121 16:32:15.622554 4760 scope.go:117] "RemoveContainer" containerID="ff059cd7ee64fc1bb7edf6f6979bcf4dea70be92be03f71b7ac9ec4f0c0cddc3" Jan 21 16:32:15 crc kubenswrapper[4760]: E0121 16:32:15.623305 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:32:30 crc kubenswrapper[4760]: I0121 16:32:30.622679 4760 scope.go:117] "RemoveContainer" containerID="ff059cd7ee64fc1bb7edf6f6979bcf4dea70be92be03f71b7ac9ec4f0c0cddc3" Jan 21 16:32:31 crc kubenswrapper[4760]: I0121 16:32:31.420615 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerStarted","Data":"eed7297f5e40d58e742e5926895158e315f6a80627f5a0d2303804b9428244b0"} Jan 21 16:33:42 crc kubenswrapper[4760]: I0121 16:33:42.034710 4760 generic.go:334] "Generic (PLEG): container finished" podID="bb09237a-f1eb-4d14-894f-ac460ce3b7c3" containerID="60be004e0cbafa4e24c9b9e44bfc2f4bcf3ce08bafe0d21a9fdab5a69d5d10fb" exitCode=0 Jan 21 16:33:42 crc kubenswrapper[4760]: I0121 16:33:42.034786 4760 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hblbg" event={"ID":"bb09237a-f1eb-4d14-894f-ac460ce3b7c3","Type":"ContainerDied","Data":"60be004e0cbafa4e24c9b9e44bfc2f4bcf3ce08bafe0d21a9fdab5a69d5d10fb"} Jan 21 16:33:43 crc kubenswrapper[4760]: I0121 16:33:43.482602 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hblbg" Jan 21 16:33:43 crc kubenswrapper[4760]: I0121 16:33:43.619098 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-ceilometer-compute-config-data-2\") pod \"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\" (UID: \"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\") " Jan 21 16:33:43 crc kubenswrapper[4760]: I0121 16:33:43.619154 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-telemetry-combined-ca-bundle\") pod \"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\" (UID: \"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\") " Jan 21 16:33:43 crc kubenswrapper[4760]: I0121 16:33:43.619207 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-ssh-key-openstack-edpm-ipam\") pod \"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\" (UID: \"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\") " Jan 21 16:33:43 crc kubenswrapper[4760]: I0121 16:33:43.619377 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvvhh\" (UniqueName: \"kubernetes.io/projected/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-kube-api-access-tvvhh\") pod \"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\" (UID: \"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\") " Jan 21 
16:33:43 crc kubenswrapper[4760]: I0121 16:33:43.619446 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-inventory\") pod \"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\" (UID: \"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\") " Jan 21 16:33:43 crc kubenswrapper[4760]: I0121 16:33:43.619480 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-ceilometer-compute-config-data-0\") pod \"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\" (UID: \"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\") " Jan 21 16:33:43 crc kubenswrapper[4760]: I0121 16:33:43.619523 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-ceilometer-compute-config-data-1\") pod \"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\" (UID: \"bb09237a-f1eb-4d14-894f-ac460ce3b7c3\") " Jan 21 16:33:43 crc kubenswrapper[4760]: I0121 16:33:43.627936 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-kube-api-access-tvvhh" (OuterVolumeSpecName: "kube-api-access-tvvhh") pod "bb09237a-f1eb-4d14-894f-ac460ce3b7c3" (UID: "bb09237a-f1eb-4d14-894f-ac460ce3b7c3"). InnerVolumeSpecName "kube-api-access-tvvhh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:33:43 crc kubenswrapper[4760]: I0121 16:33:43.627942 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "bb09237a-f1eb-4d14-894f-ac460ce3b7c3" (UID: "bb09237a-f1eb-4d14-894f-ac460ce3b7c3"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:33:43 crc kubenswrapper[4760]: I0121 16:33:43.648092 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "bb09237a-f1eb-4d14-894f-ac460ce3b7c3" (UID: "bb09237a-f1eb-4d14-894f-ac460ce3b7c3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:33:43 crc kubenswrapper[4760]: I0121 16:33:43.658002 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "bb09237a-f1eb-4d14-894f-ac460ce3b7c3" (UID: "bb09237a-f1eb-4d14-894f-ac460ce3b7c3"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:33:43 crc kubenswrapper[4760]: I0121 16:33:43.661091 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "bb09237a-f1eb-4d14-894f-ac460ce3b7c3" (UID: "bb09237a-f1eb-4d14-894f-ac460ce3b7c3"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:33:43 crc kubenswrapper[4760]: I0121 16:33:43.664930 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "bb09237a-f1eb-4d14-894f-ac460ce3b7c3" (UID: "bb09237a-f1eb-4d14-894f-ac460ce3b7c3"). InnerVolumeSpecName "ceilometer-compute-config-data-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:33:43 crc kubenswrapper[4760]: I0121 16:33:43.679862 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-inventory" (OuterVolumeSpecName: "inventory") pod "bb09237a-f1eb-4d14-894f-ac460ce3b7c3" (UID: "bb09237a-f1eb-4d14-894f-ac460ce3b7c3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:33:43 crc kubenswrapper[4760]: I0121 16:33:43.721784 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvvhh\" (UniqueName: \"kubernetes.io/projected/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-kube-api-access-tvvhh\") on node \"crc\" DevicePath \"\"" Jan 21 16:33:43 crc kubenswrapper[4760]: I0121 16:33:43.722042 4760 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 16:33:43 crc kubenswrapper[4760]: I0121 16:33:43.722129 4760 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 21 16:33:43 crc kubenswrapper[4760]: I0121 16:33:43.722224 4760 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 21 16:33:43 crc kubenswrapper[4760]: I0121 16:33:43.722316 4760 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 21 16:33:43 crc kubenswrapper[4760]: I0121 16:33:43.722514 4760 
reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 16:33:43 crc kubenswrapper[4760]: I0121 16:33:43.722578 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bb09237a-f1eb-4d14-894f-ac460ce3b7c3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 16:33:44 crc kubenswrapper[4760]: I0121 16:33:44.061482 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hblbg" event={"ID":"bb09237a-f1eb-4d14-894f-ac460ce3b7c3","Type":"ContainerDied","Data":"31795f99a764c9dcb18287799b3c1b0607a3f1c0281ec38f96872354a30d7bb2"} Jan 21 16:33:44 crc kubenswrapper[4760]: I0121 16:33:44.061530 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31795f99a764c9dcb18287799b3c1b0607a3f1c0281ec38f96872354a30d7bb2" Jan 21 16:33:44 crc kubenswrapper[4760]: I0121 16:33:44.062081 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hblbg" Jan 21 16:34:27 crc kubenswrapper[4760]: I0121 16:34:27.598519 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ktd7s"] Jan 21 16:34:27 crc kubenswrapper[4760]: E0121 16:34:27.600544 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb09237a-f1eb-4d14-894f-ac460ce3b7c3" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 21 16:34:27 crc kubenswrapper[4760]: I0121 16:34:27.600646 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb09237a-f1eb-4d14-894f-ac460ce3b7c3" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 21 16:34:27 crc kubenswrapper[4760]: I0121 16:34:27.600912 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb09237a-f1eb-4d14-894f-ac460ce3b7c3" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 21 16:34:27 crc kubenswrapper[4760]: I0121 16:34:27.602851 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ktd7s" Jan 21 16:34:27 crc kubenswrapper[4760]: I0121 16:34:27.613852 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ktd7s"] Jan 21 16:34:27 crc kubenswrapper[4760]: I0121 16:34:27.766595 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a535920e-5aa2-48bf-bb4f-4b7215145882-utilities\") pod \"redhat-operators-ktd7s\" (UID: \"a535920e-5aa2-48bf-bb4f-4b7215145882\") " pod="openshift-marketplace/redhat-operators-ktd7s" Jan 21 16:34:27 crc kubenswrapper[4760]: I0121 16:34:27.766644 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a535920e-5aa2-48bf-bb4f-4b7215145882-catalog-content\") pod \"redhat-operators-ktd7s\" (UID: \"a535920e-5aa2-48bf-bb4f-4b7215145882\") " pod="openshift-marketplace/redhat-operators-ktd7s" Jan 21 16:34:27 crc kubenswrapper[4760]: I0121 16:34:27.766694 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p545t\" (UniqueName: \"kubernetes.io/projected/a535920e-5aa2-48bf-bb4f-4b7215145882-kube-api-access-p545t\") pod \"redhat-operators-ktd7s\" (UID: \"a535920e-5aa2-48bf-bb4f-4b7215145882\") " pod="openshift-marketplace/redhat-operators-ktd7s" Jan 21 16:34:27 crc kubenswrapper[4760]: I0121 16:34:27.868359 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a535920e-5aa2-48bf-bb4f-4b7215145882-utilities\") pod \"redhat-operators-ktd7s\" (UID: \"a535920e-5aa2-48bf-bb4f-4b7215145882\") " pod="openshift-marketplace/redhat-operators-ktd7s" Jan 21 16:34:27 crc kubenswrapper[4760]: I0121 16:34:27.868403 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a535920e-5aa2-48bf-bb4f-4b7215145882-catalog-content\") pod \"redhat-operators-ktd7s\" (UID: \"a535920e-5aa2-48bf-bb4f-4b7215145882\") " pod="openshift-marketplace/redhat-operators-ktd7s" Jan 21 16:34:27 crc kubenswrapper[4760]: I0121 16:34:27.868440 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p545t\" (UniqueName: \"kubernetes.io/projected/a535920e-5aa2-48bf-bb4f-4b7215145882-kube-api-access-p545t\") pod \"redhat-operators-ktd7s\" (UID: \"a535920e-5aa2-48bf-bb4f-4b7215145882\") " pod="openshift-marketplace/redhat-operators-ktd7s" Jan 21 16:34:27 crc kubenswrapper[4760]: I0121 16:34:27.869251 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a535920e-5aa2-48bf-bb4f-4b7215145882-utilities\") pod \"redhat-operators-ktd7s\" (UID: \"a535920e-5aa2-48bf-bb4f-4b7215145882\") " pod="openshift-marketplace/redhat-operators-ktd7s" Jan 21 16:34:27 crc kubenswrapper[4760]: I0121 16:34:27.869292 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a535920e-5aa2-48bf-bb4f-4b7215145882-catalog-content\") pod \"redhat-operators-ktd7s\" (UID: \"a535920e-5aa2-48bf-bb4f-4b7215145882\") " pod="openshift-marketplace/redhat-operators-ktd7s" Jan 21 16:34:27 crc kubenswrapper[4760]: I0121 16:34:27.901621 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p545t\" (UniqueName: \"kubernetes.io/projected/a535920e-5aa2-48bf-bb4f-4b7215145882-kube-api-access-p545t\") pod \"redhat-operators-ktd7s\" (UID: \"a535920e-5aa2-48bf-bb4f-4b7215145882\") " pod="openshift-marketplace/redhat-operators-ktd7s" Jan 21 16:34:27 crc kubenswrapper[4760]: I0121 16:34:27.940911 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ktd7s" Jan 21 16:34:28 crc kubenswrapper[4760]: I0121 16:34:28.450009 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ktd7s"] Jan 21 16:34:29 crc kubenswrapper[4760]: I0121 16:34:29.459224 4760 generic.go:334] "Generic (PLEG): container finished" podID="a535920e-5aa2-48bf-bb4f-4b7215145882" containerID="b1427fc31ad44be8e9c3a4d5c19db48a2d0e12b7ed8be09a219bf325a5905c11" exitCode=0 Jan 21 16:34:29 crc kubenswrapper[4760]: I0121 16:34:29.459280 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ktd7s" event={"ID":"a535920e-5aa2-48bf-bb4f-4b7215145882","Type":"ContainerDied","Data":"b1427fc31ad44be8e9c3a4d5c19db48a2d0e12b7ed8be09a219bf325a5905c11"} Jan 21 16:34:29 crc kubenswrapper[4760]: I0121 16:34:29.460757 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ktd7s" event={"ID":"a535920e-5aa2-48bf-bb4f-4b7215145882","Type":"ContainerStarted","Data":"43d9064a9976283c3189b3ed787dc3c4204813a18ce96ba40699f3b46e2f166d"} Jan 21 16:34:32 crc kubenswrapper[4760]: I0121 16:34:32.489926 4760 generic.go:334] "Generic (PLEG): container finished" podID="a535920e-5aa2-48bf-bb4f-4b7215145882" containerID="5ec2072ada81acda0f8d7634ef5723530e7bf390670b24826e86ec285347099b" exitCode=0 Jan 21 16:34:32 crc kubenswrapper[4760]: I0121 16:34:32.490002 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ktd7s" event={"ID":"a535920e-5aa2-48bf-bb4f-4b7215145882","Type":"ContainerDied","Data":"5ec2072ada81acda0f8d7634ef5723530e7bf390670b24826e86ec285347099b"} Jan 21 16:34:34 crc kubenswrapper[4760]: I0121 16:34:34.513148 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ktd7s" 
event={"ID":"a535920e-5aa2-48bf-bb4f-4b7215145882","Type":"ContainerStarted","Data":"48f2ea343b4779d5968feeeb42f9bbc310cb57047cdfee682f4fd5f1601352f5"} Jan 21 16:34:34 crc kubenswrapper[4760]: I0121 16:34:34.533794 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ktd7s" podStartSLOduration=3.481337871 podStartE2EDuration="7.533775133s" podCreationTimestamp="2026-01-21 16:34:27 +0000 UTC" firstStartedPulling="2026-01-21 16:34:29.463860885 +0000 UTC m=+2840.131630463" lastFinishedPulling="2026-01-21 16:34:33.516298147 +0000 UTC m=+2844.184067725" observedRunningTime="2026-01-21 16:34:34.530238876 +0000 UTC m=+2845.198008464" watchObservedRunningTime="2026-01-21 16:34:34.533775133 +0000 UTC m=+2845.201544711" Jan 21 16:34:37 crc kubenswrapper[4760]: I0121 16:34:37.941468 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ktd7s" Jan 21 16:34:37 crc kubenswrapper[4760]: I0121 16:34:37.941805 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ktd7s" Jan 21 16:34:38 crc kubenswrapper[4760]: I0121 16:34:38.988504 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ktd7s" podUID="a535920e-5aa2-48bf-bb4f-4b7215145882" containerName="registry-server" probeResult="failure" output=< Jan 21 16:34:38 crc kubenswrapper[4760]: timeout: failed to connect service ":50051" within 1s Jan 21 16:34:38 crc kubenswrapper[4760]: > Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.164589 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9ng74"] Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.167043 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9ng74" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.182951 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9ng74"] Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.291621 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae6a8eec-7d32-4ef8-883e-2de2476b54cc-utilities\") pod \"community-operators-9ng74\" (UID: \"ae6a8eec-7d32-4ef8-883e-2de2476b54cc\") " pod="openshift-marketplace/community-operators-9ng74" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.291715 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae6a8eec-7d32-4ef8-883e-2de2476b54cc-catalog-content\") pod \"community-operators-9ng74\" (UID: \"ae6a8eec-7d32-4ef8-883e-2de2476b54cc\") " pod="openshift-marketplace/community-operators-9ng74" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.291905 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5kbr\" (UniqueName: \"kubernetes.io/projected/ae6a8eec-7d32-4ef8-883e-2de2476b54cc-kube-api-access-t5kbr\") pod \"community-operators-9ng74\" (UID: \"ae6a8eec-7d32-4ef8-883e-2de2476b54cc\") " pod="openshift-marketplace/community-operators-9ng74" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.393754 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae6a8eec-7d32-4ef8-883e-2de2476b54cc-utilities\") pod \"community-operators-9ng74\" (UID: \"ae6a8eec-7d32-4ef8-883e-2de2476b54cc\") " pod="openshift-marketplace/community-operators-9ng74" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.393855 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae6a8eec-7d32-4ef8-883e-2de2476b54cc-catalog-content\") pod \"community-operators-9ng74\" (UID: \"ae6a8eec-7d32-4ef8-883e-2de2476b54cc\") " pod="openshift-marketplace/community-operators-9ng74" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.393918 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5kbr\" (UniqueName: \"kubernetes.io/projected/ae6a8eec-7d32-4ef8-883e-2de2476b54cc-kube-api-access-t5kbr\") pod \"community-operators-9ng74\" (UID: \"ae6a8eec-7d32-4ef8-883e-2de2476b54cc\") " pod="openshift-marketplace/community-operators-9ng74" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.394479 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae6a8eec-7d32-4ef8-883e-2de2476b54cc-utilities\") pod \"community-operators-9ng74\" (UID: \"ae6a8eec-7d32-4ef8-883e-2de2476b54cc\") " pod="openshift-marketplace/community-operators-9ng74" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.394512 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae6a8eec-7d32-4ef8-883e-2de2476b54cc-catalog-content\") pod \"community-operators-9ng74\" (UID: \"ae6a8eec-7d32-4ef8-883e-2de2476b54cc\") " pod="openshift-marketplace/community-operators-9ng74" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.420393 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5kbr\" (UniqueName: \"kubernetes.io/projected/ae6a8eec-7d32-4ef8-883e-2de2476b54cc-kube-api-access-t5kbr\") pod \"community-operators-9ng74\" (UID: \"ae6a8eec-7d32-4ef8-883e-2de2476b54cc\") " pod="openshift-marketplace/community-operators-9ng74" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.490001 4760 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/tempest-tests-tempest"] Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.491231 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.493918 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-tg5qr" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.493935 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.494534 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.494837 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.498547 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9ng74" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.518530 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.596606 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/061a538a-0f39-44c0-9c33-e96701ced31e-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") " pod="openstack/tempest-tests-tempest" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.596693 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/061a538a-0f39-44c0-9c33-e96701ced31e-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") " pod="openstack/tempest-tests-tempest" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.596739 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") " pod="openstack/tempest-tests-tempest" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.596777 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/061a538a-0f39-44c0-9c33-e96701ced31e-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") " pod="openstack/tempest-tests-tempest" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.596805 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" 
(UniqueName: \"kubernetes.io/secret/061a538a-0f39-44c0-9c33-e96701ced31e-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") " pod="openstack/tempest-tests-tempest" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.596833 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/061a538a-0f39-44c0-9c33-e96701ced31e-config-data\") pod \"tempest-tests-tempest\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") " pod="openstack/tempest-tests-tempest" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.596881 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/061a538a-0f39-44c0-9c33-e96701ced31e-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") " pod="openstack/tempest-tests-tempest" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.596911 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwfvb\" (UniqueName: \"kubernetes.io/projected/061a538a-0f39-44c0-9c33-e96701ced31e-kube-api-access-bwfvb\") pod \"tempest-tests-tempest\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") " pod="openstack/tempest-tests-tempest" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.596955 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/061a538a-0f39-44c0-9c33-e96701ced31e-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") " pod="openstack/tempest-tests-tempest" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.698624 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" 
(UniqueName: \"kubernetes.io/configmap/061a538a-0f39-44c0-9c33-e96701ced31e-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") " pod="openstack/tempest-tests-tempest" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.698693 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/061a538a-0f39-44c0-9c33-e96701ced31e-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") " pod="openstack/tempest-tests-tempest" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.698736 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") " pod="openstack/tempest-tests-tempest" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.698759 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/061a538a-0f39-44c0-9c33-e96701ced31e-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") " pod="openstack/tempest-tests-tempest" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.698778 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/061a538a-0f39-44c0-9c33-e96701ced31e-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") " pod="openstack/tempest-tests-tempest" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.698799 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/061a538a-0f39-44c0-9c33-e96701ced31e-config-data\") pod 
\"tempest-tests-tempest\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") " pod="openstack/tempest-tests-tempest" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.698855 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/061a538a-0f39-44c0-9c33-e96701ced31e-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") " pod="openstack/tempest-tests-tempest" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.698875 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwfvb\" (UniqueName: \"kubernetes.io/projected/061a538a-0f39-44c0-9c33-e96701ced31e-kube-api-access-bwfvb\") pod \"tempest-tests-tempest\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") " pod="openstack/tempest-tests-tempest" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.698906 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/061a538a-0f39-44c0-9c33-e96701ced31e-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") " pod="openstack/tempest-tests-tempest" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.699436 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/061a538a-0f39-44c0-9c33-e96701ced31e-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") " pod="openstack/tempest-tests-tempest" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.699864 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") device 
mount path \"/mnt/openstack/pv05\"" pod="openstack/tempest-tests-tempest" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.718575 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/061a538a-0f39-44c0-9c33-e96701ced31e-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") " pod="openstack/tempest-tests-tempest" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.719697 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/061a538a-0f39-44c0-9c33-e96701ced31e-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") " pod="openstack/tempest-tests-tempest" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.719830 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/061a538a-0f39-44c0-9c33-e96701ced31e-config-data\") pod \"tempest-tests-tempest\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") " pod="openstack/tempest-tests-tempest" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.723704 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/061a538a-0f39-44c0-9c33-e96701ced31e-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") " pod="openstack/tempest-tests-tempest" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.724114 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/061a538a-0f39-44c0-9c33-e96701ced31e-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") " pod="openstack/tempest-tests-tempest" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 
16:34:39.726716 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwfvb\" (UniqueName: \"kubernetes.io/projected/061a538a-0f39-44c0-9c33-e96701ced31e-kube-api-access-bwfvb\") pod \"tempest-tests-tempest\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") " pod="openstack/tempest-tests-tempest" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.730600 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/061a538a-0f39-44c0-9c33-e96701ced31e-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") " pod="openstack/tempest-tests-tempest" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.734145 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") " pod="openstack/tempest-tests-tempest" Jan 21 16:34:39 crc kubenswrapper[4760]: I0121 16:34:39.811625 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 21 16:34:40 crc kubenswrapper[4760]: I0121 16:34:40.026598 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9ng74"] Jan 21 16:34:40 crc kubenswrapper[4760]: I0121 16:34:40.281690 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 21 16:34:40 crc kubenswrapper[4760]: W0121 16:34:40.299249 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod061a538a_0f39_44c0_9c33_e96701ced31e.slice/crio-d21f2f4cbc861872e72cfdc47fc11bbeccdac42c2fa0bb1da4540201ec4b28df WatchSource:0}: Error finding container d21f2f4cbc861872e72cfdc47fc11bbeccdac42c2fa0bb1da4540201ec4b28df: Status 404 returned error can't find the container with id d21f2f4cbc861872e72cfdc47fc11bbeccdac42c2fa0bb1da4540201ec4b28df Jan 21 16:34:40 crc kubenswrapper[4760]: I0121 16:34:40.595911 4760 generic.go:334] "Generic (PLEG): container finished" podID="ae6a8eec-7d32-4ef8-883e-2de2476b54cc" containerID="9f904bd92db2d6eee2e6eb471361b0caf9b97ee4fd67ea14a23106c0b6a901c4" exitCode=0 Jan 21 16:34:40 crc kubenswrapper[4760]: I0121 16:34:40.596029 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9ng74" event={"ID":"ae6a8eec-7d32-4ef8-883e-2de2476b54cc","Type":"ContainerDied","Data":"9f904bd92db2d6eee2e6eb471361b0caf9b97ee4fd67ea14a23106c0b6a901c4"} Jan 21 16:34:40 crc kubenswrapper[4760]: I0121 16:34:40.596056 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9ng74" event={"ID":"ae6a8eec-7d32-4ef8-883e-2de2476b54cc","Type":"ContainerStarted","Data":"7b29fe7b0284d9b46678cfa4e656bf98afd65507ee4be73e88d68ec5de6cde76"} Jan 21 16:34:40 crc kubenswrapper[4760]: I0121 16:34:40.597341 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" 
event={"ID":"061a538a-0f39-44c0-9c33-e96701ced31e","Type":"ContainerStarted","Data":"d21f2f4cbc861872e72cfdc47fc11bbeccdac42c2fa0bb1da4540201ec4b28df"} Jan 21 16:34:41 crc kubenswrapper[4760]: I0121 16:34:41.607401 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9ng74" event={"ID":"ae6a8eec-7d32-4ef8-883e-2de2476b54cc","Type":"ContainerStarted","Data":"ce7669346ee74af4a14d39dc8041188e262c34258bd81630fa455c94dd3d5ff7"} Jan 21 16:34:42 crc kubenswrapper[4760]: I0121 16:34:42.618075 4760 generic.go:334] "Generic (PLEG): container finished" podID="ae6a8eec-7d32-4ef8-883e-2de2476b54cc" containerID="ce7669346ee74af4a14d39dc8041188e262c34258bd81630fa455c94dd3d5ff7" exitCode=0 Jan 21 16:34:42 crc kubenswrapper[4760]: I0121 16:34:42.618118 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9ng74" event={"ID":"ae6a8eec-7d32-4ef8-883e-2de2476b54cc","Type":"ContainerDied","Data":"ce7669346ee74af4a14d39dc8041188e262c34258bd81630fa455c94dd3d5ff7"} Jan 21 16:34:46 crc kubenswrapper[4760]: I0121 16:34:46.655992 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9ng74" event={"ID":"ae6a8eec-7d32-4ef8-883e-2de2476b54cc","Type":"ContainerStarted","Data":"844698888e0d03e12ae041b5c1b2730438f292a2d61eb50ebe2e2f46703c7287"} Jan 21 16:34:46 crc kubenswrapper[4760]: I0121 16:34:46.674602 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9ng74" podStartSLOduration=2.053516385 podStartE2EDuration="7.67458328s" podCreationTimestamp="2026-01-21 16:34:39 +0000 UTC" firstStartedPulling="2026-01-21 16:34:40.598134236 +0000 UTC m=+2851.265903814" lastFinishedPulling="2026-01-21 16:34:46.219201131 +0000 UTC m=+2856.886970709" observedRunningTime="2026-01-21 16:34:46.673034152 +0000 UTC m=+2857.340803750" watchObservedRunningTime="2026-01-21 16:34:46.67458328 +0000 UTC 
m=+2857.342352878" Jan 21 16:34:48 crc kubenswrapper[4760]: I0121 16:34:48.027423 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ktd7s" Jan 21 16:34:48 crc kubenswrapper[4760]: I0121 16:34:48.079769 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ktd7s" Jan 21 16:34:48 crc kubenswrapper[4760]: I0121 16:34:48.614360 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ktd7s"] Jan 21 16:34:49 crc kubenswrapper[4760]: I0121 16:34:49.499306 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9ng74" Jan 21 16:34:49 crc kubenswrapper[4760]: I0121 16:34:49.499386 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9ng74" Jan 21 16:34:49 crc kubenswrapper[4760]: I0121 16:34:49.679654 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ktd7s" podUID="a535920e-5aa2-48bf-bb4f-4b7215145882" containerName="registry-server" containerID="cri-o://48f2ea343b4779d5968feeeb42f9bbc310cb57047cdfee682f4fd5f1601352f5" gracePeriod=2 Jan 21 16:34:50 crc kubenswrapper[4760]: I0121 16:34:50.517968 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ktd7s" Jan 21 16:34:50 crc kubenswrapper[4760]: I0121 16:34:50.558679 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p545t\" (UniqueName: \"kubernetes.io/projected/a535920e-5aa2-48bf-bb4f-4b7215145882-kube-api-access-p545t\") pod \"a535920e-5aa2-48bf-bb4f-4b7215145882\" (UID: \"a535920e-5aa2-48bf-bb4f-4b7215145882\") " Jan 21 16:34:50 crc kubenswrapper[4760]: I0121 16:34:50.558826 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a535920e-5aa2-48bf-bb4f-4b7215145882-utilities\") pod \"a535920e-5aa2-48bf-bb4f-4b7215145882\" (UID: \"a535920e-5aa2-48bf-bb4f-4b7215145882\") " Jan 21 16:34:50 crc kubenswrapper[4760]: I0121 16:34:50.558857 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a535920e-5aa2-48bf-bb4f-4b7215145882-catalog-content\") pod \"a535920e-5aa2-48bf-bb4f-4b7215145882\" (UID: \"a535920e-5aa2-48bf-bb4f-4b7215145882\") " Jan 21 16:34:50 crc kubenswrapper[4760]: I0121 16:34:50.560785 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a535920e-5aa2-48bf-bb4f-4b7215145882-utilities" (OuterVolumeSpecName: "utilities") pod "a535920e-5aa2-48bf-bb4f-4b7215145882" (UID: "a535920e-5aa2-48bf-bb4f-4b7215145882"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:34:50 crc kubenswrapper[4760]: I0121 16:34:50.561174 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-9ng74" podUID="ae6a8eec-7d32-4ef8-883e-2de2476b54cc" containerName="registry-server" probeResult="failure" output=< Jan 21 16:34:50 crc kubenswrapper[4760]: timeout: failed to connect service ":50051" within 1s Jan 21 16:34:50 crc kubenswrapper[4760]: > Jan 21 16:34:50 crc kubenswrapper[4760]: I0121 16:34:50.577615 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a535920e-5aa2-48bf-bb4f-4b7215145882-kube-api-access-p545t" (OuterVolumeSpecName: "kube-api-access-p545t") pod "a535920e-5aa2-48bf-bb4f-4b7215145882" (UID: "a535920e-5aa2-48bf-bb4f-4b7215145882"). InnerVolumeSpecName "kube-api-access-p545t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:34:50 crc kubenswrapper[4760]: I0121 16:34:50.662030 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a535920e-5aa2-48bf-bb4f-4b7215145882-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:34:50 crc kubenswrapper[4760]: I0121 16:34:50.662065 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p545t\" (UniqueName: \"kubernetes.io/projected/a535920e-5aa2-48bf-bb4f-4b7215145882-kube-api-access-p545t\") on node \"crc\" DevicePath \"\"" Jan 21 16:34:50 crc kubenswrapper[4760]: I0121 16:34:50.672839 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a535920e-5aa2-48bf-bb4f-4b7215145882-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a535920e-5aa2-48bf-bb4f-4b7215145882" (UID: "a535920e-5aa2-48bf-bb4f-4b7215145882"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:34:50 crc kubenswrapper[4760]: I0121 16:34:50.690523 4760 generic.go:334] "Generic (PLEG): container finished" podID="a535920e-5aa2-48bf-bb4f-4b7215145882" containerID="48f2ea343b4779d5968feeeb42f9bbc310cb57047cdfee682f4fd5f1601352f5" exitCode=0 Jan 21 16:34:50 crc kubenswrapper[4760]: I0121 16:34:50.690565 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ktd7s" event={"ID":"a535920e-5aa2-48bf-bb4f-4b7215145882","Type":"ContainerDied","Data":"48f2ea343b4779d5968feeeb42f9bbc310cb57047cdfee682f4fd5f1601352f5"} Jan 21 16:34:50 crc kubenswrapper[4760]: I0121 16:34:50.690592 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ktd7s" event={"ID":"a535920e-5aa2-48bf-bb4f-4b7215145882","Type":"ContainerDied","Data":"43d9064a9976283c3189b3ed787dc3c4204813a18ce96ba40699f3b46e2f166d"} Jan 21 16:34:50 crc kubenswrapper[4760]: I0121 16:34:50.690608 4760 scope.go:117] "RemoveContainer" containerID="48f2ea343b4779d5968feeeb42f9bbc310cb57047cdfee682f4fd5f1601352f5" Jan 21 16:34:50 crc kubenswrapper[4760]: I0121 16:34:50.690727 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ktd7s" Jan 21 16:34:50 crc kubenswrapper[4760]: I0121 16:34:50.725192 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ktd7s"] Jan 21 16:34:50 crc kubenswrapper[4760]: I0121 16:34:50.734761 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ktd7s"] Jan 21 16:34:50 crc kubenswrapper[4760]: I0121 16:34:50.738037 4760 scope.go:117] "RemoveContainer" containerID="5ec2072ada81acda0f8d7634ef5723530e7bf390670b24826e86ec285347099b" Jan 21 16:34:50 crc kubenswrapper[4760]: I0121 16:34:50.763264 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a535920e-5aa2-48bf-bb4f-4b7215145882-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:34:50 crc kubenswrapper[4760]: I0121 16:34:50.778040 4760 scope.go:117] "RemoveContainer" containerID="b1427fc31ad44be8e9c3a4d5c19db48a2d0e12b7ed8be09a219bf325a5905c11" Jan 21 16:34:50 crc kubenswrapper[4760]: I0121 16:34:50.816112 4760 scope.go:117] "RemoveContainer" containerID="48f2ea343b4779d5968feeeb42f9bbc310cb57047cdfee682f4fd5f1601352f5" Jan 21 16:34:50 crc kubenswrapper[4760]: E0121 16:34:50.816720 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48f2ea343b4779d5968feeeb42f9bbc310cb57047cdfee682f4fd5f1601352f5\": container with ID starting with 48f2ea343b4779d5968feeeb42f9bbc310cb57047cdfee682f4fd5f1601352f5 not found: ID does not exist" containerID="48f2ea343b4779d5968feeeb42f9bbc310cb57047cdfee682f4fd5f1601352f5" Jan 21 16:34:50 crc kubenswrapper[4760]: I0121 16:34:50.816769 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48f2ea343b4779d5968feeeb42f9bbc310cb57047cdfee682f4fd5f1601352f5"} err="failed to get container status 
\"48f2ea343b4779d5968feeeb42f9bbc310cb57047cdfee682f4fd5f1601352f5\": rpc error: code = NotFound desc = could not find container \"48f2ea343b4779d5968feeeb42f9bbc310cb57047cdfee682f4fd5f1601352f5\": container with ID starting with 48f2ea343b4779d5968feeeb42f9bbc310cb57047cdfee682f4fd5f1601352f5 not found: ID does not exist" Jan 21 16:34:50 crc kubenswrapper[4760]: I0121 16:34:50.816789 4760 scope.go:117] "RemoveContainer" containerID="5ec2072ada81acda0f8d7634ef5723530e7bf390670b24826e86ec285347099b" Jan 21 16:34:50 crc kubenswrapper[4760]: E0121 16:34:50.817454 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ec2072ada81acda0f8d7634ef5723530e7bf390670b24826e86ec285347099b\": container with ID starting with 5ec2072ada81acda0f8d7634ef5723530e7bf390670b24826e86ec285347099b not found: ID does not exist" containerID="5ec2072ada81acda0f8d7634ef5723530e7bf390670b24826e86ec285347099b" Jan 21 16:34:50 crc kubenswrapper[4760]: I0121 16:34:50.817498 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ec2072ada81acda0f8d7634ef5723530e7bf390670b24826e86ec285347099b"} err="failed to get container status \"5ec2072ada81acda0f8d7634ef5723530e7bf390670b24826e86ec285347099b\": rpc error: code = NotFound desc = could not find container \"5ec2072ada81acda0f8d7634ef5723530e7bf390670b24826e86ec285347099b\": container with ID starting with 5ec2072ada81acda0f8d7634ef5723530e7bf390670b24826e86ec285347099b not found: ID does not exist" Jan 21 16:34:50 crc kubenswrapper[4760]: I0121 16:34:50.817517 4760 scope.go:117] "RemoveContainer" containerID="b1427fc31ad44be8e9c3a4d5c19db48a2d0e12b7ed8be09a219bf325a5905c11" Jan 21 16:34:50 crc kubenswrapper[4760]: E0121 16:34:50.817854 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b1427fc31ad44be8e9c3a4d5c19db48a2d0e12b7ed8be09a219bf325a5905c11\": container with ID starting with b1427fc31ad44be8e9c3a4d5c19db48a2d0e12b7ed8be09a219bf325a5905c11 not found: ID does not exist" containerID="b1427fc31ad44be8e9c3a4d5c19db48a2d0e12b7ed8be09a219bf325a5905c11" Jan 21 16:34:50 crc kubenswrapper[4760]: I0121 16:34:50.817929 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1427fc31ad44be8e9c3a4d5c19db48a2d0e12b7ed8be09a219bf325a5905c11"} err="failed to get container status \"b1427fc31ad44be8e9c3a4d5c19db48a2d0e12b7ed8be09a219bf325a5905c11\": rpc error: code = NotFound desc = could not find container \"b1427fc31ad44be8e9c3a4d5c19db48a2d0e12b7ed8be09a219bf325a5905c11\": container with ID starting with b1427fc31ad44be8e9c3a4d5c19db48a2d0e12b7ed8be09a219bf325a5905c11 not found: ID does not exist" Jan 21 16:34:50 crc kubenswrapper[4760]: I0121 16:34:50.946066 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:34:50 crc kubenswrapper[4760]: I0121 16:34:50.946428 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:34:51 crc kubenswrapper[4760]: I0121 16:34:51.644947 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a535920e-5aa2-48bf-bb4f-4b7215145882" path="/var/lib/kubelet/pods/a535920e-5aa2-48bf-bb4f-4b7215145882/volumes" Jan 21 16:34:59 crc kubenswrapper[4760]: I0121 16:34:59.553262 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/community-operators-9ng74" Jan 21 16:34:59 crc kubenswrapper[4760]: I0121 16:34:59.611151 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9ng74" Jan 21 16:34:59 crc kubenswrapper[4760]: I0121 16:34:59.796756 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9ng74"] Jan 21 16:35:00 crc kubenswrapper[4760]: I0121 16:35:00.783556 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9ng74" podUID="ae6a8eec-7d32-4ef8-883e-2de2476b54cc" containerName="registry-server" containerID="cri-o://844698888e0d03e12ae041b5c1b2730438f292a2d61eb50ebe2e2f46703c7287" gracePeriod=2 Jan 21 16:35:01 crc kubenswrapper[4760]: I0121 16:35:01.798877 4760 generic.go:334] "Generic (PLEG): container finished" podID="ae6a8eec-7d32-4ef8-883e-2de2476b54cc" containerID="844698888e0d03e12ae041b5c1b2730438f292a2d61eb50ebe2e2f46703c7287" exitCode=0 Jan 21 16:35:01 crc kubenswrapper[4760]: I0121 16:35:01.798962 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9ng74" event={"ID":"ae6a8eec-7d32-4ef8-883e-2de2476b54cc","Type":"ContainerDied","Data":"844698888e0d03e12ae041b5c1b2730438f292a2d61eb50ebe2e2f46703c7287"} Jan 21 16:35:09 crc kubenswrapper[4760]: E0121 16:35:09.500108 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 844698888e0d03e12ae041b5c1b2730438f292a2d61eb50ebe2e2f46703c7287 is running failed: container process not found" containerID="844698888e0d03e12ae041b5c1b2730438f292a2d61eb50ebe2e2f46703c7287" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 16:35:09 crc kubenswrapper[4760]: E0121 16:35:09.501630 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = 
container is not created or running: checking if PID of 844698888e0d03e12ae041b5c1b2730438f292a2d61eb50ebe2e2f46703c7287 is running failed: container process not found" containerID="844698888e0d03e12ae041b5c1b2730438f292a2d61eb50ebe2e2f46703c7287" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 16:35:09 crc kubenswrapper[4760]: E0121 16:35:09.502110 4760 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 844698888e0d03e12ae041b5c1b2730438f292a2d61eb50ebe2e2f46703c7287 is running failed: container process not found" containerID="844698888e0d03e12ae041b5c1b2730438f292a2d61eb50ebe2e2f46703c7287" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 16:35:09 crc kubenswrapper[4760]: E0121 16:35:09.502184 4760 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 844698888e0d03e12ae041b5c1b2730438f292a2d61eb50ebe2e2f46703c7287 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-9ng74" podUID="ae6a8eec-7d32-4ef8-883e-2de2476b54cc" containerName="registry-server" Jan 21 16:35:10 crc kubenswrapper[4760]: E0121 16:35:10.845729 4760 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Jan 21 16:35:10 crc kubenswrapper[4760]: E0121 16:35:10.846152 4760 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bwfvb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:n
il,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(061a538a-0f39-44c0-9c33-e96701ced31e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 16:35:10 crc kubenswrapper[4760]: E0121 16:35:10.847736 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="061a538a-0f39-44c0-9c33-e96701ced31e" Jan 21 16:35:10 crc kubenswrapper[4760]: E0121 16:35:10.873404 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="061a538a-0f39-44c0-9c33-e96701ced31e" Jan 21 16:35:11 crc 
kubenswrapper[4760]: I0121 16:35:11.092520 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9ng74" Jan 21 16:35:11 crc kubenswrapper[4760]: I0121 16:35:11.230536 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae6a8eec-7d32-4ef8-883e-2de2476b54cc-catalog-content\") pod \"ae6a8eec-7d32-4ef8-883e-2de2476b54cc\" (UID: \"ae6a8eec-7d32-4ef8-883e-2de2476b54cc\") " Jan 21 16:35:11 crc kubenswrapper[4760]: I0121 16:35:11.230602 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae6a8eec-7d32-4ef8-883e-2de2476b54cc-utilities\") pod \"ae6a8eec-7d32-4ef8-883e-2de2476b54cc\" (UID: \"ae6a8eec-7d32-4ef8-883e-2de2476b54cc\") " Jan 21 16:35:11 crc kubenswrapper[4760]: I0121 16:35:11.230738 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5kbr\" (UniqueName: \"kubernetes.io/projected/ae6a8eec-7d32-4ef8-883e-2de2476b54cc-kube-api-access-t5kbr\") pod \"ae6a8eec-7d32-4ef8-883e-2de2476b54cc\" (UID: \"ae6a8eec-7d32-4ef8-883e-2de2476b54cc\") " Jan 21 16:35:11 crc kubenswrapper[4760]: I0121 16:35:11.231991 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae6a8eec-7d32-4ef8-883e-2de2476b54cc-utilities" (OuterVolumeSpecName: "utilities") pod "ae6a8eec-7d32-4ef8-883e-2de2476b54cc" (UID: "ae6a8eec-7d32-4ef8-883e-2de2476b54cc"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:35:11 crc kubenswrapper[4760]: I0121 16:35:11.243137 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae6a8eec-7d32-4ef8-883e-2de2476b54cc-kube-api-access-t5kbr" (OuterVolumeSpecName: "kube-api-access-t5kbr") pod "ae6a8eec-7d32-4ef8-883e-2de2476b54cc" (UID: "ae6a8eec-7d32-4ef8-883e-2de2476b54cc"). InnerVolumeSpecName "kube-api-access-t5kbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:35:11 crc kubenswrapper[4760]: I0121 16:35:11.269585 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae6a8eec-7d32-4ef8-883e-2de2476b54cc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ae6a8eec-7d32-4ef8-883e-2de2476b54cc" (UID: "ae6a8eec-7d32-4ef8-883e-2de2476b54cc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:35:11 crc kubenswrapper[4760]: I0121 16:35:11.332846 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae6a8eec-7d32-4ef8-883e-2de2476b54cc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:35:11 crc kubenswrapper[4760]: I0121 16:35:11.332881 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae6a8eec-7d32-4ef8-883e-2de2476b54cc-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:35:11 crc kubenswrapper[4760]: I0121 16:35:11.332891 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5kbr\" (UniqueName: \"kubernetes.io/projected/ae6a8eec-7d32-4ef8-883e-2de2476b54cc-kube-api-access-t5kbr\") on node \"crc\" DevicePath \"\"" Jan 21 16:35:11 crc kubenswrapper[4760]: I0121 16:35:11.884827 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9ng74" 
event={"ID":"ae6a8eec-7d32-4ef8-883e-2de2476b54cc","Type":"ContainerDied","Data":"7b29fe7b0284d9b46678cfa4e656bf98afd65507ee4be73e88d68ec5de6cde76"} Jan 21 16:35:11 crc kubenswrapper[4760]: I0121 16:35:11.885125 4760 scope.go:117] "RemoveContainer" containerID="844698888e0d03e12ae041b5c1b2730438f292a2d61eb50ebe2e2f46703c7287" Jan 21 16:35:11 crc kubenswrapper[4760]: I0121 16:35:11.885003 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9ng74" Jan 21 16:35:11 crc kubenswrapper[4760]: I0121 16:35:11.914525 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9ng74"] Jan 21 16:35:11 crc kubenswrapper[4760]: I0121 16:35:11.914984 4760 scope.go:117] "RemoveContainer" containerID="ce7669346ee74af4a14d39dc8041188e262c34258bd81630fa455c94dd3d5ff7" Jan 21 16:35:11 crc kubenswrapper[4760]: I0121 16:35:11.924111 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9ng74"] Jan 21 16:35:11 crc kubenswrapper[4760]: I0121 16:35:11.944216 4760 scope.go:117] "RemoveContainer" containerID="9f904bd92db2d6eee2e6eb471361b0caf9b97ee4fd67ea14a23106c0b6a901c4" Jan 21 16:35:13 crc kubenswrapper[4760]: I0121 16:35:13.633975 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae6a8eec-7d32-4ef8-883e-2de2476b54cc" path="/var/lib/kubelet/pods/ae6a8eec-7d32-4ef8-883e-2de2476b54cc/volumes" Jan 21 16:35:20 crc kubenswrapper[4760]: I0121 16:35:20.946586 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:35:20 crc kubenswrapper[4760]: I0121 16:35:20.947106 4760 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:35:27 crc kubenswrapper[4760]: I0121 16:35:27.199737 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 21 16:35:29 crc kubenswrapper[4760]: I0121 16:35:29.057217 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"061a538a-0f39-44c0-9c33-e96701ced31e","Type":"ContainerStarted","Data":"07ddae2dc9f99ce4c063d1dc9b89965d4135b8ad24f73e8f85faabfb35ed3463"} Jan 21 16:35:29 crc kubenswrapper[4760]: I0121 16:35:29.080110 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.185924651 podStartE2EDuration="51.080087008s" podCreationTimestamp="2026-01-21 16:34:38 +0000 UTC" firstStartedPulling="2026-01-21 16:34:40.302470923 +0000 UTC m=+2850.970240501" lastFinishedPulling="2026-01-21 16:35:27.19663328 +0000 UTC m=+2897.864402858" observedRunningTime="2026-01-21 16:35:29.074510961 +0000 UTC m=+2899.742280559" watchObservedRunningTime="2026-01-21 16:35:29.080087008 +0000 UTC m=+2899.747856606" Jan 21 16:35:41 crc kubenswrapper[4760]: I0121 16:35:41.450245 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-q5jh9"] Jan 21 16:35:41 crc kubenswrapper[4760]: E0121 16:35:41.451018 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae6a8eec-7d32-4ef8-883e-2de2476b54cc" containerName="extract-utilities" Jan 21 16:35:41 crc kubenswrapper[4760]: I0121 16:35:41.451032 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae6a8eec-7d32-4ef8-883e-2de2476b54cc" containerName="extract-utilities" Jan 21 16:35:41 crc kubenswrapper[4760]: E0121 
16:35:41.451047 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a535920e-5aa2-48bf-bb4f-4b7215145882" containerName="registry-server" Jan 21 16:35:41 crc kubenswrapper[4760]: I0121 16:35:41.451053 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="a535920e-5aa2-48bf-bb4f-4b7215145882" containerName="registry-server" Jan 21 16:35:41 crc kubenswrapper[4760]: E0121 16:35:41.451064 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a535920e-5aa2-48bf-bb4f-4b7215145882" containerName="extract-utilities" Jan 21 16:35:41 crc kubenswrapper[4760]: I0121 16:35:41.451070 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="a535920e-5aa2-48bf-bb4f-4b7215145882" containerName="extract-utilities" Jan 21 16:35:41 crc kubenswrapper[4760]: E0121 16:35:41.451083 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae6a8eec-7d32-4ef8-883e-2de2476b54cc" containerName="registry-server" Jan 21 16:35:41 crc kubenswrapper[4760]: I0121 16:35:41.451089 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae6a8eec-7d32-4ef8-883e-2de2476b54cc" containerName="registry-server" Jan 21 16:35:41 crc kubenswrapper[4760]: E0121 16:35:41.451102 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae6a8eec-7d32-4ef8-883e-2de2476b54cc" containerName="extract-content" Jan 21 16:35:41 crc kubenswrapper[4760]: I0121 16:35:41.451107 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae6a8eec-7d32-4ef8-883e-2de2476b54cc" containerName="extract-content" Jan 21 16:35:41 crc kubenswrapper[4760]: E0121 16:35:41.451132 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a535920e-5aa2-48bf-bb4f-4b7215145882" containerName="extract-content" Jan 21 16:35:41 crc kubenswrapper[4760]: I0121 16:35:41.451138 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="a535920e-5aa2-48bf-bb4f-4b7215145882" containerName="extract-content" Jan 21 16:35:41 crc kubenswrapper[4760]: I0121 
16:35:41.451410 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae6a8eec-7d32-4ef8-883e-2de2476b54cc" containerName="registry-server" Jan 21 16:35:41 crc kubenswrapper[4760]: I0121 16:35:41.451424 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="a535920e-5aa2-48bf-bb4f-4b7215145882" containerName="registry-server" Jan 21 16:35:41 crc kubenswrapper[4760]: I0121 16:35:41.452913 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q5jh9" Jan 21 16:35:41 crc kubenswrapper[4760]: I0121 16:35:41.478617 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q5jh9"] Jan 21 16:35:41 crc kubenswrapper[4760]: I0121 16:35:41.577163 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dea1f41-c019-4d7c-bef3-8b85c7ffabd5-catalog-content\") pod \"certified-operators-q5jh9\" (UID: \"0dea1f41-c019-4d7c-bef3-8b85c7ffabd5\") " pod="openshift-marketplace/certified-operators-q5jh9" Jan 21 16:35:41 crc kubenswrapper[4760]: I0121 16:35:41.577292 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dea1f41-c019-4d7c-bef3-8b85c7ffabd5-utilities\") pod \"certified-operators-q5jh9\" (UID: \"0dea1f41-c019-4d7c-bef3-8b85c7ffabd5\") " pod="openshift-marketplace/certified-operators-q5jh9" Jan 21 16:35:41 crc kubenswrapper[4760]: I0121 16:35:41.577485 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qbfc\" (UniqueName: \"kubernetes.io/projected/0dea1f41-c019-4d7c-bef3-8b85c7ffabd5-kube-api-access-5qbfc\") pod \"certified-operators-q5jh9\" (UID: \"0dea1f41-c019-4d7c-bef3-8b85c7ffabd5\") " pod="openshift-marketplace/certified-operators-q5jh9" Jan 21 16:35:41 crc 
kubenswrapper[4760]: I0121 16:35:41.678881 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qbfc\" (UniqueName: \"kubernetes.io/projected/0dea1f41-c019-4d7c-bef3-8b85c7ffabd5-kube-api-access-5qbfc\") pod \"certified-operators-q5jh9\" (UID: \"0dea1f41-c019-4d7c-bef3-8b85c7ffabd5\") " pod="openshift-marketplace/certified-operators-q5jh9" Jan 21 16:35:41 crc kubenswrapper[4760]: I0121 16:35:41.679238 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dea1f41-c019-4d7c-bef3-8b85c7ffabd5-catalog-content\") pod \"certified-operators-q5jh9\" (UID: \"0dea1f41-c019-4d7c-bef3-8b85c7ffabd5\") " pod="openshift-marketplace/certified-operators-q5jh9" Jan 21 16:35:41 crc kubenswrapper[4760]: I0121 16:35:41.679355 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dea1f41-c019-4d7c-bef3-8b85c7ffabd5-utilities\") pod \"certified-operators-q5jh9\" (UID: \"0dea1f41-c019-4d7c-bef3-8b85c7ffabd5\") " pod="openshift-marketplace/certified-operators-q5jh9" Jan 21 16:35:41 crc kubenswrapper[4760]: I0121 16:35:41.679829 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dea1f41-c019-4d7c-bef3-8b85c7ffabd5-utilities\") pod \"certified-operators-q5jh9\" (UID: \"0dea1f41-c019-4d7c-bef3-8b85c7ffabd5\") " pod="openshift-marketplace/certified-operators-q5jh9" Jan 21 16:35:41 crc kubenswrapper[4760]: I0121 16:35:41.679857 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dea1f41-c019-4d7c-bef3-8b85c7ffabd5-catalog-content\") pod \"certified-operators-q5jh9\" (UID: \"0dea1f41-c019-4d7c-bef3-8b85c7ffabd5\") " pod="openshift-marketplace/certified-operators-q5jh9" Jan 21 16:35:41 crc kubenswrapper[4760]: I0121 16:35:41.705967 
4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qbfc\" (UniqueName: \"kubernetes.io/projected/0dea1f41-c019-4d7c-bef3-8b85c7ffabd5-kube-api-access-5qbfc\") pod \"certified-operators-q5jh9\" (UID: \"0dea1f41-c019-4d7c-bef3-8b85c7ffabd5\") " pod="openshift-marketplace/certified-operators-q5jh9" Jan 21 16:35:41 crc kubenswrapper[4760]: I0121 16:35:41.771787 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q5jh9" Jan 21 16:35:42 crc kubenswrapper[4760]: I0121 16:35:42.079080 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8k7dc"] Jan 21 16:35:42 crc kubenswrapper[4760]: I0121 16:35:42.081140 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8k7dc" Jan 21 16:35:42 crc kubenswrapper[4760]: I0121 16:35:42.112813 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8k7dc"] Jan 21 16:35:42 crc kubenswrapper[4760]: I0121 16:35:42.161183 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q5jh9"] Jan 21 16:35:42 crc kubenswrapper[4760]: I0121 16:35:42.188901 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d823ca0-c452-4095-a5b0-910667cc2673-catalog-content\") pod \"redhat-marketplace-8k7dc\" (UID: \"6d823ca0-c452-4095-a5b0-910667cc2673\") " pod="openshift-marketplace/redhat-marketplace-8k7dc" Jan 21 16:35:42 crc kubenswrapper[4760]: I0121 16:35:42.189032 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnp6r\" (UniqueName: \"kubernetes.io/projected/6d823ca0-c452-4095-a5b0-910667cc2673-kube-api-access-pnp6r\") pod \"redhat-marketplace-8k7dc\" (UID: 
\"6d823ca0-c452-4095-a5b0-910667cc2673\") " pod="openshift-marketplace/redhat-marketplace-8k7dc" Jan 21 16:35:42 crc kubenswrapper[4760]: I0121 16:35:42.189197 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d823ca0-c452-4095-a5b0-910667cc2673-utilities\") pod \"redhat-marketplace-8k7dc\" (UID: \"6d823ca0-c452-4095-a5b0-910667cc2673\") " pod="openshift-marketplace/redhat-marketplace-8k7dc" Jan 21 16:35:42 crc kubenswrapper[4760]: I0121 16:35:42.203065 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q5jh9" event={"ID":"0dea1f41-c019-4d7c-bef3-8b85c7ffabd5","Type":"ContainerStarted","Data":"75284eb40155a403f823c9ed62125ce692b33e7815220285a66fa46bfa337cf3"} Jan 21 16:35:42 crc kubenswrapper[4760]: I0121 16:35:42.290759 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d823ca0-c452-4095-a5b0-910667cc2673-catalog-content\") pod \"redhat-marketplace-8k7dc\" (UID: \"6d823ca0-c452-4095-a5b0-910667cc2673\") " pod="openshift-marketplace/redhat-marketplace-8k7dc" Jan 21 16:35:42 crc kubenswrapper[4760]: I0121 16:35:42.291241 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnp6r\" (UniqueName: \"kubernetes.io/projected/6d823ca0-c452-4095-a5b0-910667cc2673-kube-api-access-pnp6r\") pod \"redhat-marketplace-8k7dc\" (UID: \"6d823ca0-c452-4095-a5b0-910667cc2673\") " pod="openshift-marketplace/redhat-marketplace-8k7dc" Jan 21 16:35:42 crc kubenswrapper[4760]: I0121 16:35:42.291406 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d823ca0-c452-4095-a5b0-910667cc2673-catalog-content\") pod \"redhat-marketplace-8k7dc\" (UID: \"6d823ca0-c452-4095-a5b0-910667cc2673\") " 
pod="openshift-marketplace/redhat-marketplace-8k7dc" Jan 21 16:35:42 crc kubenswrapper[4760]: I0121 16:35:42.291417 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d823ca0-c452-4095-a5b0-910667cc2673-utilities\") pod \"redhat-marketplace-8k7dc\" (UID: \"6d823ca0-c452-4095-a5b0-910667cc2673\") " pod="openshift-marketplace/redhat-marketplace-8k7dc" Jan 21 16:35:42 crc kubenswrapper[4760]: I0121 16:35:42.291697 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d823ca0-c452-4095-a5b0-910667cc2673-utilities\") pod \"redhat-marketplace-8k7dc\" (UID: \"6d823ca0-c452-4095-a5b0-910667cc2673\") " pod="openshift-marketplace/redhat-marketplace-8k7dc" Jan 21 16:35:42 crc kubenswrapper[4760]: I0121 16:35:42.331600 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnp6r\" (UniqueName: \"kubernetes.io/projected/6d823ca0-c452-4095-a5b0-910667cc2673-kube-api-access-pnp6r\") pod \"redhat-marketplace-8k7dc\" (UID: \"6d823ca0-c452-4095-a5b0-910667cc2673\") " pod="openshift-marketplace/redhat-marketplace-8k7dc" Jan 21 16:35:42 crc kubenswrapper[4760]: I0121 16:35:42.435561 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8k7dc" Jan 21 16:35:42 crc kubenswrapper[4760]: I0121 16:35:42.715969 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8k7dc"] Jan 21 16:35:42 crc kubenswrapper[4760]: W0121 16:35:42.733210 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d823ca0_c452_4095_a5b0_910667cc2673.slice/crio-863afe0c7413f4816b7da21739c21cd8a93720a5c09104e4ea1462e5d68bcce7 WatchSource:0}: Error finding container 863afe0c7413f4816b7da21739c21cd8a93720a5c09104e4ea1462e5d68bcce7: Status 404 returned error can't find the container with id 863afe0c7413f4816b7da21739c21cd8a93720a5c09104e4ea1462e5d68bcce7 Jan 21 16:35:43 crc kubenswrapper[4760]: I0121 16:35:43.214142 4760 generic.go:334] "Generic (PLEG): container finished" podID="0dea1f41-c019-4d7c-bef3-8b85c7ffabd5" containerID="b9942cd0e31eeaf50684c201df68eb53cf56d2fcd485a021c3a6f4d0a22d43d9" exitCode=0 Jan 21 16:35:43 crc kubenswrapper[4760]: I0121 16:35:43.214197 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q5jh9" event={"ID":"0dea1f41-c019-4d7c-bef3-8b85c7ffabd5","Type":"ContainerDied","Data":"b9942cd0e31eeaf50684c201df68eb53cf56d2fcd485a021c3a6f4d0a22d43d9"} Jan 21 16:35:43 crc kubenswrapper[4760]: I0121 16:35:43.216426 4760 generic.go:334] "Generic (PLEG): container finished" podID="6d823ca0-c452-4095-a5b0-910667cc2673" containerID="fbbd5b3a8edf04b2ad933f1b37d2fdf5a430babf1fa4c10e888dee4c7d167b57" exitCode=0 Jan 21 16:35:43 crc kubenswrapper[4760]: I0121 16:35:43.216461 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8k7dc" event={"ID":"6d823ca0-c452-4095-a5b0-910667cc2673","Type":"ContainerDied","Data":"fbbd5b3a8edf04b2ad933f1b37d2fdf5a430babf1fa4c10e888dee4c7d167b57"} Jan 21 16:35:43 crc kubenswrapper[4760]: I0121 
16:35:43.216530 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8k7dc" event={"ID":"6d823ca0-c452-4095-a5b0-910667cc2673","Type":"ContainerStarted","Data":"863afe0c7413f4816b7da21739c21cd8a93720a5c09104e4ea1462e5d68bcce7"} Jan 21 16:35:45 crc kubenswrapper[4760]: I0121 16:35:45.234938 4760 generic.go:334] "Generic (PLEG): container finished" podID="6d823ca0-c452-4095-a5b0-910667cc2673" containerID="7edfa899ce77834778aec0ed40da107198434677481c5937cc7b967eccc1aef6" exitCode=0 Jan 21 16:35:45 crc kubenswrapper[4760]: I0121 16:35:45.234999 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8k7dc" event={"ID":"6d823ca0-c452-4095-a5b0-910667cc2673","Type":"ContainerDied","Data":"7edfa899ce77834778aec0ed40da107198434677481c5937cc7b967eccc1aef6"} Jan 21 16:35:46 crc kubenswrapper[4760]: I0121 16:35:46.250529 4760 generic.go:334] "Generic (PLEG): container finished" podID="0dea1f41-c019-4d7c-bef3-8b85c7ffabd5" containerID="2126e37bb56af93647682a90801cb7d43bc67b7e0a6a5b0ab907de20e72dc0e9" exitCode=0 Jan 21 16:35:46 crc kubenswrapper[4760]: I0121 16:35:46.250727 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q5jh9" event={"ID":"0dea1f41-c019-4d7c-bef3-8b85c7ffabd5","Type":"ContainerDied","Data":"2126e37bb56af93647682a90801cb7d43bc67b7e0a6a5b0ab907de20e72dc0e9"} Jan 21 16:35:47 crc kubenswrapper[4760]: I0121 16:35:47.259493 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8k7dc" event={"ID":"6d823ca0-c452-4095-a5b0-910667cc2673","Type":"ContainerStarted","Data":"ac7c1d55f9539d28dea34a9d406fa4a167a5132b5c837f6454a700c1e2f53faa"} Jan 21 16:35:47 crc kubenswrapper[4760]: I0121 16:35:47.261539 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q5jh9" 
event={"ID":"0dea1f41-c019-4d7c-bef3-8b85c7ffabd5","Type":"ContainerStarted","Data":"d45a8b43430ac9a3348097231d51b3aa698edac05930339f6bd352e3e18e5971"} Jan 21 16:35:47 crc kubenswrapper[4760]: I0121 16:35:47.279657 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8k7dc" podStartSLOduration=1.832225078 podStartE2EDuration="5.279638215s" podCreationTimestamp="2026-01-21 16:35:42 +0000 UTC" firstStartedPulling="2026-01-21 16:35:43.218379216 +0000 UTC m=+2913.886148794" lastFinishedPulling="2026-01-21 16:35:46.665792353 +0000 UTC m=+2917.333561931" observedRunningTime="2026-01-21 16:35:47.277829541 +0000 UTC m=+2917.945599129" watchObservedRunningTime="2026-01-21 16:35:47.279638215 +0000 UTC m=+2917.947407793" Jan 21 16:35:47 crc kubenswrapper[4760]: I0121 16:35:47.300338 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-q5jh9" podStartSLOduration=2.788382549 podStartE2EDuration="6.300312495s" podCreationTimestamp="2026-01-21 16:35:41 +0000 UTC" firstStartedPulling="2026-01-21 16:35:43.216065639 +0000 UTC m=+2913.883835217" lastFinishedPulling="2026-01-21 16:35:46.727995585 +0000 UTC m=+2917.395765163" observedRunningTime="2026-01-21 16:35:47.296215494 +0000 UTC m=+2917.963985072" watchObservedRunningTime="2026-01-21 16:35:47.300312495 +0000 UTC m=+2917.968082073" Jan 21 16:35:50 crc kubenswrapper[4760]: I0121 16:35:50.946034 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:35:50 crc kubenswrapper[4760]: I0121 16:35:50.946381 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" 
podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:35:50 crc kubenswrapper[4760]: I0121 16:35:50.946424 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" Jan 21 16:35:50 crc kubenswrapper[4760]: I0121 16:35:50.947162 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"eed7297f5e40d58e742e5926895158e315f6a80627f5a0d2303804b9428244b0"} pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 16:35:50 crc kubenswrapper[4760]: I0121 16:35:50.947213 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" containerID="cri-o://eed7297f5e40d58e742e5926895158e315f6a80627f5a0d2303804b9428244b0" gracePeriod=600 Jan 21 16:35:51 crc kubenswrapper[4760]: I0121 16:35:51.295712 4760 generic.go:334] "Generic (PLEG): container finished" podID="5dd365e7-570c-4130-a299-30e376624ce2" containerID="eed7297f5e40d58e742e5926895158e315f6a80627f5a0d2303804b9428244b0" exitCode=0 Jan 21 16:35:51 crc kubenswrapper[4760]: I0121 16:35:51.295751 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerDied","Data":"eed7297f5e40d58e742e5926895158e315f6a80627f5a0d2303804b9428244b0"} Jan 21 16:35:51 crc kubenswrapper[4760]: I0121 16:35:51.295783 4760 scope.go:117] "RemoveContainer" 
containerID="ff059cd7ee64fc1bb7edf6f6979bcf4dea70be92be03f71b7ac9ec4f0c0cddc3" Jan 21 16:35:51 crc kubenswrapper[4760]: I0121 16:35:51.773069 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-q5jh9" Jan 21 16:35:51 crc kubenswrapper[4760]: I0121 16:35:51.775229 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-q5jh9" Jan 21 16:35:51 crc kubenswrapper[4760]: I0121 16:35:51.821109 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-q5jh9" Jan 21 16:35:52 crc kubenswrapper[4760]: I0121 16:35:52.308568 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerStarted","Data":"55dee4eb9868f3c9bf1409cba7a3551fe0e1a068bf7fcdeaf70de656dea0180f"} Jan 21 16:35:52 crc kubenswrapper[4760]: I0121 16:35:52.356918 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-q5jh9" Jan 21 16:35:52 crc kubenswrapper[4760]: I0121 16:35:52.405677 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-q5jh9"] Jan 21 16:35:52 crc kubenswrapper[4760]: I0121 16:35:52.436845 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8k7dc" Jan 21 16:35:52 crc kubenswrapper[4760]: I0121 16:35:52.436977 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8k7dc" Jan 21 16:35:52 crc kubenswrapper[4760]: I0121 16:35:52.483129 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8k7dc" Jan 21 16:35:53 crc kubenswrapper[4760]: I0121 16:35:53.362356 4760 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8k7dc" Jan 21 16:35:54 crc kubenswrapper[4760]: I0121 16:35:54.324653 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-q5jh9" podUID="0dea1f41-c019-4d7c-bef3-8b85c7ffabd5" containerName="registry-server" containerID="cri-o://d45a8b43430ac9a3348097231d51b3aa698edac05930339f6bd352e3e18e5971" gracePeriod=2 Jan 21 16:35:54 crc kubenswrapper[4760]: I0121 16:35:54.659860 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8k7dc"] Jan 21 16:35:54 crc kubenswrapper[4760]: I0121 16:35:54.801157 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q5jh9" Jan 21 16:35:54 crc kubenswrapper[4760]: I0121 16:35:54.838026 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dea1f41-c019-4d7c-bef3-8b85c7ffabd5-utilities\") pod \"0dea1f41-c019-4d7c-bef3-8b85c7ffabd5\" (UID: \"0dea1f41-c019-4d7c-bef3-8b85c7ffabd5\") " Jan 21 16:35:54 crc kubenswrapper[4760]: I0121 16:35:54.839249 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0dea1f41-c019-4d7c-bef3-8b85c7ffabd5-utilities" (OuterVolumeSpecName: "utilities") pod "0dea1f41-c019-4d7c-bef3-8b85c7ffabd5" (UID: "0dea1f41-c019-4d7c-bef3-8b85c7ffabd5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:35:54 crc kubenswrapper[4760]: I0121 16:35:54.839556 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qbfc\" (UniqueName: \"kubernetes.io/projected/0dea1f41-c019-4d7c-bef3-8b85c7ffabd5-kube-api-access-5qbfc\") pod \"0dea1f41-c019-4d7c-bef3-8b85c7ffabd5\" (UID: \"0dea1f41-c019-4d7c-bef3-8b85c7ffabd5\") " Jan 21 16:35:54 crc kubenswrapper[4760]: I0121 16:35:54.839701 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dea1f41-c019-4d7c-bef3-8b85c7ffabd5-catalog-content\") pod \"0dea1f41-c019-4d7c-bef3-8b85c7ffabd5\" (UID: \"0dea1f41-c019-4d7c-bef3-8b85c7ffabd5\") " Jan 21 16:35:54 crc kubenswrapper[4760]: I0121 16:35:54.844165 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dea1f41-c019-4d7c-bef3-8b85c7ffabd5-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:35:54 crc kubenswrapper[4760]: I0121 16:35:54.846911 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dea1f41-c019-4d7c-bef3-8b85c7ffabd5-kube-api-access-5qbfc" (OuterVolumeSpecName: "kube-api-access-5qbfc") pod "0dea1f41-c019-4d7c-bef3-8b85c7ffabd5" (UID: "0dea1f41-c019-4d7c-bef3-8b85c7ffabd5"). InnerVolumeSpecName "kube-api-access-5qbfc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:35:54 crc kubenswrapper[4760]: I0121 16:35:54.903266 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0dea1f41-c019-4d7c-bef3-8b85c7ffabd5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0dea1f41-c019-4d7c-bef3-8b85c7ffabd5" (UID: "0dea1f41-c019-4d7c-bef3-8b85c7ffabd5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:35:54 crc kubenswrapper[4760]: I0121 16:35:54.946468 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qbfc\" (UniqueName: \"kubernetes.io/projected/0dea1f41-c019-4d7c-bef3-8b85c7ffabd5-kube-api-access-5qbfc\") on node \"crc\" DevicePath \"\"" Jan 21 16:35:54 crc kubenswrapper[4760]: I0121 16:35:54.946507 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dea1f41-c019-4d7c-bef3-8b85c7ffabd5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:35:55 crc kubenswrapper[4760]: I0121 16:35:55.334156 4760 generic.go:334] "Generic (PLEG): container finished" podID="0dea1f41-c019-4d7c-bef3-8b85c7ffabd5" containerID="d45a8b43430ac9a3348097231d51b3aa698edac05930339f6bd352e3e18e5971" exitCode=0 Jan 21 16:35:55 crc kubenswrapper[4760]: I0121 16:35:55.334212 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q5jh9" Jan 21 16:35:55 crc kubenswrapper[4760]: I0121 16:35:55.334194 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q5jh9" event={"ID":"0dea1f41-c019-4d7c-bef3-8b85c7ffabd5","Type":"ContainerDied","Data":"d45a8b43430ac9a3348097231d51b3aa698edac05930339f6bd352e3e18e5971"} Jan 21 16:35:55 crc kubenswrapper[4760]: I0121 16:35:55.334517 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q5jh9" event={"ID":"0dea1f41-c019-4d7c-bef3-8b85c7ffabd5","Type":"ContainerDied","Data":"75284eb40155a403f823c9ed62125ce692b33e7815220285a66fa46bfa337cf3"} Jan 21 16:35:55 crc kubenswrapper[4760]: I0121 16:35:55.334540 4760 scope.go:117] "RemoveContainer" containerID="d45a8b43430ac9a3348097231d51b3aa698edac05930339f6bd352e3e18e5971" Jan 21 16:35:55 crc kubenswrapper[4760]: I0121 16:35:55.334798 4760 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-marketplace/redhat-marketplace-8k7dc" podUID="6d823ca0-c452-4095-a5b0-910667cc2673" containerName="registry-server" containerID="cri-o://ac7c1d55f9539d28dea34a9d406fa4a167a5132b5c837f6454a700c1e2f53faa" gracePeriod=2 Jan 21 16:35:55 crc kubenswrapper[4760]: I0121 16:35:55.367926 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-q5jh9"] Jan 21 16:35:55 crc kubenswrapper[4760]: I0121 16:35:55.379046 4760 scope.go:117] "RemoveContainer" containerID="2126e37bb56af93647682a90801cb7d43bc67b7e0a6a5b0ab907de20e72dc0e9" Jan 21 16:35:55 crc kubenswrapper[4760]: I0121 16:35:55.379575 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-q5jh9"] Jan 21 16:35:55 crc kubenswrapper[4760]: I0121 16:35:55.428490 4760 scope.go:117] "RemoveContainer" containerID="b9942cd0e31eeaf50684c201df68eb53cf56d2fcd485a021c3a6f4d0a22d43d9" Jan 21 16:35:55 crc kubenswrapper[4760]: I0121 16:35:55.528733 4760 scope.go:117] "RemoveContainer" containerID="d45a8b43430ac9a3348097231d51b3aa698edac05930339f6bd352e3e18e5971" Jan 21 16:35:55 crc kubenswrapper[4760]: E0121 16:35:55.529909 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d45a8b43430ac9a3348097231d51b3aa698edac05930339f6bd352e3e18e5971\": container with ID starting with d45a8b43430ac9a3348097231d51b3aa698edac05930339f6bd352e3e18e5971 not found: ID does not exist" containerID="d45a8b43430ac9a3348097231d51b3aa698edac05930339f6bd352e3e18e5971" Jan 21 16:35:55 crc kubenswrapper[4760]: I0121 16:35:55.529954 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d45a8b43430ac9a3348097231d51b3aa698edac05930339f6bd352e3e18e5971"} err="failed to get container status \"d45a8b43430ac9a3348097231d51b3aa698edac05930339f6bd352e3e18e5971\": rpc error: code = NotFound desc = could not find 
container \"d45a8b43430ac9a3348097231d51b3aa698edac05930339f6bd352e3e18e5971\": container with ID starting with d45a8b43430ac9a3348097231d51b3aa698edac05930339f6bd352e3e18e5971 not found: ID does not exist" Jan 21 16:35:55 crc kubenswrapper[4760]: I0121 16:35:55.529979 4760 scope.go:117] "RemoveContainer" containerID="2126e37bb56af93647682a90801cb7d43bc67b7e0a6a5b0ab907de20e72dc0e9" Jan 21 16:35:55 crc kubenswrapper[4760]: E0121 16:35:55.530420 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2126e37bb56af93647682a90801cb7d43bc67b7e0a6a5b0ab907de20e72dc0e9\": container with ID starting with 2126e37bb56af93647682a90801cb7d43bc67b7e0a6a5b0ab907de20e72dc0e9 not found: ID does not exist" containerID="2126e37bb56af93647682a90801cb7d43bc67b7e0a6a5b0ab907de20e72dc0e9" Jan 21 16:35:55 crc kubenswrapper[4760]: I0121 16:35:55.530482 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2126e37bb56af93647682a90801cb7d43bc67b7e0a6a5b0ab907de20e72dc0e9"} err="failed to get container status \"2126e37bb56af93647682a90801cb7d43bc67b7e0a6a5b0ab907de20e72dc0e9\": rpc error: code = NotFound desc = could not find container \"2126e37bb56af93647682a90801cb7d43bc67b7e0a6a5b0ab907de20e72dc0e9\": container with ID starting with 2126e37bb56af93647682a90801cb7d43bc67b7e0a6a5b0ab907de20e72dc0e9 not found: ID does not exist" Jan 21 16:35:55 crc kubenswrapper[4760]: I0121 16:35:55.530527 4760 scope.go:117] "RemoveContainer" containerID="b9942cd0e31eeaf50684c201df68eb53cf56d2fcd485a021c3a6f4d0a22d43d9" Jan 21 16:35:55 crc kubenswrapper[4760]: E0121 16:35:55.531342 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9942cd0e31eeaf50684c201df68eb53cf56d2fcd485a021c3a6f4d0a22d43d9\": container with ID starting with b9942cd0e31eeaf50684c201df68eb53cf56d2fcd485a021c3a6f4d0a22d43d9 not found: ID does 
not exist" containerID="b9942cd0e31eeaf50684c201df68eb53cf56d2fcd485a021c3a6f4d0a22d43d9" Jan 21 16:35:55 crc kubenswrapper[4760]: I0121 16:35:55.531381 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9942cd0e31eeaf50684c201df68eb53cf56d2fcd485a021c3a6f4d0a22d43d9"} err="failed to get container status \"b9942cd0e31eeaf50684c201df68eb53cf56d2fcd485a021c3a6f4d0a22d43d9\": rpc error: code = NotFound desc = could not find container \"b9942cd0e31eeaf50684c201df68eb53cf56d2fcd485a021c3a6f4d0a22d43d9\": container with ID starting with b9942cd0e31eeaf50684c201df68eb53cf56d2fcd485a021c3a6f4d0a22d43d9 not found: ID does not exist" Jan 21 16:35:55 crc kubenswrapper[4760]: I0121 16:35:55.643268 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dea1f41-c019-4d7c-bef3-8b85c7ffabd5" path="/var/lib/kubelet/pods/0dea1f41-c019-4d7c-bef3-8b85c7ffabd5/volumes" Jan 21 16:35:55 crc kubenswrapper[4760]: I0121 16:35:55.775533 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8k7dc" Jan 21 16:35:55 crc kubenswrapper[4760]: I0121 16:35:55.863010 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d823ca0-c452-4095-a5b0-910667cc2673-catalog-content\") pod \"6d823ca0-c452-4095-a5b0-910667cc2673\" (UID: \"6d823ca0-c452-4095-a5b0-910667cc2673\") " Jan 21 16:35:55 crc kubenswrapper[4760]: I0121 16:35:55.863270 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d823ca0-c452-4095-a5b0-910667cc2673-utilities\") pod \"6d823ca0-c452-4095-a5b0-910667cc2673\" (UID: \"6d823ca0-c452-4095-a5b0-910667cc2673\") " Jan 21 16:35:55 crc kubenswrapper[4760]: I0121 16:35:55.863366 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnp6r\" (UniqueName: \"kubernetes.io/projected/6d823ca0-c452-4095-a5b0-910667cc2673-kube-api-access-pnp6r\") pod \"6d823ca0-c452-4095-a5b0-910667cc2673\" (UID: \"6d823ca0-c452-4095-a5b0-910667cc2673\") " Jan 21 16:35:55 crc kubenswrapper[4760]: I0121 16:35:55.865226 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d823ca0-c452-4095-a5b0-910667cc2673-utilities" (OuterVolumeSpecName: "utilities") pod "6d823ca0-c452-4095-a5b0-910667cc2673" (UID: "6d823ca0-c452-4095-a5b0-910667cc2673"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:35:55 crc kubenswrapper[4760]: I0121 16:35:55.871239 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d823ca0-c452-4095-a5b0-910667cc2673-kube-api-access-pnp6r" (OuterVolumeSpecName: "kube-api-access-pnp6r") pod "6d823ca0-c452-4095-a5b0-910667cc2673" (UID: "6d823ca0-c452-4095-a5b0-910667cc2673"). InnerVolumeSpecName "kube-api-access-pnp6r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:35:55 crc kubenswrapper[4760]: I0121 16:35:55.885197 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d823ca0-c452-4095-a5b0-910667cc2673-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6d823ca0-c452-4095-a5b0-910667cc2673" (UID: "6d823ca0-c452-4095-a5b0-910667cc2673"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:35:55 crc kubenswrapper[4760]: I0121 16:35:55.965916 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d823ca0-c452-4095-a5b0-910667cc2673-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:35:55 crc kubenswrapper[4760]: I0121 16:35:55.965946 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d823ca0-c452-4095-a5b0-910667cc2673-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:35:55 crc kubenswrapper[4760]: I0121 16:35:55.965956 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnp6r\" (UniqueName: \"kubernetes.io/projected/6d823ca0-c452-4095-a5b0-910667cc2673-kube-api-access-pnp6r\") on node \"crc\" DevicePath \"\"" Jan 21 16:35:56 crc kubenswrapper[4760]: I0121 16:35:56.366415 4760 generic.go:334] "Generic (PLEG): container finished" podID="6d823ca0-c452-4095-a5b0-910667cc2673" containerID="ac7c1d55f9539d28dea34a9d406fa4a167a5132b5c837f6454a700c1e2f53faa" exitCode=0 Jan 21 16:35:56 crc kubenswrapper[4760]: I0121 16:35:56.366778 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8k7dc" Jan 21 16:35:56 crc kubenswrapper[4760]: I0121 16:35:56.367458 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8k7dc" event={"ID":"6d823ca0-c452-4095-a5b0-910667cc2673","Type":"ContainerDied","Data":"ac7c1d55f9539d28dea34a9d406fa4a167a5132b5c837f6454a700c1e2f53faa"} Jan 21 16:35:56 crc kubenswrapper[4760]: I0121 16:35:56.367521 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8k7dc" event={"ID":"6d823ca0-c452-4095-a5b0-910667cc2673","Type":"ContainerDied","Data":"863afe0c7413f4816b7da21739c21cd8a93720a5c09104e4ea1462e5d68bcce7"} Jan 21 16:35:56 crc kubenswrapper[4760]: I0121 16:35:56.367545 4760 scope.go:117] "RemoveContainer" containerID="ac7c1d55f9539d28dea34a9d406fa4a167a5132b5c837f6454a700c1e2f53faa" Jan 21 16:35:56 crc kubenswrapper[4760]: I0121 16:35:56.406189 4760 scope.go:117] "RemoveContainer" containerID="7edfa899ce77834778aec0ed40da107198434677481c5937cc7b967eccc1aef6" Jan 21 16:35:56 crc kubenswrapper[4760]: I0121 16:35:56.417477 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8k7dc"] Jan 21 16:35:56 crc kubenswrapper[4760]: I0121 16:35:56.429202 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8k7dc"] Jan 21 16:35:56 crc kubenswrapper[4760]: I0121 16:35:56.435143 4760 scope.go:117] "RemoveContainer" containerID="fbbd5b3a8edf04b2ad933f1b37d2fdf5a430babf1fa4c10e888dee4c7d167b57" Jan 21 16:35:56 crc kubenswrapper[4760]: I0121 16:35:56.475611 4760 scope.go:117] "RemoveContainer" containerID="ac7c1d55f9539d28dea34a9d406fa4a167a5132b5c837f6454a700c1e2f53faa" Jan 21 16:35:56 crc kubenswrapper[4760]: E0121 16:35:56.476272 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"ac7c1d55f9539d28dea34a9d406fa4a167a5132b5c837f6454a700c1e2f53faa\": container with ID starting with ac7c1d55f9539d28dea34a9d406fa4a167a5132b5c837f6454a700c1e2f53faa not found: ID does not exist" containerID="ac7c1d55f9539d28dea34a9d406fa4a167a5132b5c837f6454a700c1e2f53faa" Jan 21 16:35:56 crc kubenswrapper[4760]: I0121 16:35:56.476455 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac7c1d55f9539d28dea34a9d406fa4a167a5132b5c837f6454a700c1e2f53faa"} err="failed to get container status \"ac7c1d55f9539d28dea34a9d406fa4a167a5132b5c837f6454a700c1e2f53faa\": rpc error: code = NotFound desc = could not find container \"ac7c1d55f9539d28dea34a9d406fa4a167a5132b5c837f6454a700c1e2f53faa\": container with ID starting with ac7c1d55f9539d28dea34a9d406fa4a167a5132b5c837f6454a700c1e2f53faa not found: ID does not exist" Jan 21 16:35:56 crc kubenswrapper[4760]: I0121 16:35:56.476635 4760 scope.go:117] "RemoveContainer" containerID="7edfa899ce77834778aec0ed40da107198434677481c5937cc7b967eccc1aef6" Jan 21 16:35:56 crc kubenswrapper[4760]: E0121 16:35:56.477149 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7edfa899ce77834778aec0ed40da107198434677481c5937cc7b967eccc1aef6\": container with ID starting with 7edfa899ce77834778aec0ed40da107198434677481c5937cc7b967eccc1aef6 not found: ID does not exist" containerID="7edfa899ce77834778aec0ed40da107198434677481c5937cc7b967eccc1aef6" Jan 21 16:35:56 crc kubenswrapper[4760]: I0121 16:35:56.477202 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7edfa899ce77834778aec0ed40da107198434677481c5937cc7b967eccc1aef6"} err="failed to get container status \"7edfa899ce77834778aec0ed40da107198434677481c5937cc7b967eccc1aef6\": rpc error: code = NotFound desc = could not find container \"7edfa899ce77834778aec0ed40da107198434677481c5937cc7b967eccc1aef6\": container with ID 
starting with 7edfa899ce77834778aec0ed40da107198434677481c5937cc7b967eccc1aef6 not found: ID does not exist" Jan 21 16:35:56 crc kubenswrapper[4760]: I0121 16:35:56.477232 4760 scope.go:117] "RemoveContainer" containerID="fbbd5b3a8edf04b2ad933f1b37d2fdf5a430babf1fa4c10e888dee4c7d167b57" Jan 21 16:35:56 crc kubenswrapper[4760]: E0121 16:35:56.477551 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbbd5b3a8edf04b2ad933f1b37d2fdf5a430babf1fa4c10e888dee4c7d167b57\": container with ID starting with fbbd5b3a8edf04b2ad933f1b37d2fdf5a430babf1fa4c10e888dee4c7d167b57 not found: ID does not exist" containerID="fbbd5b3a8edf04b2ad933f1b37d2fdf5a430babf1fa4c10e888dee4c7d167b57" Jan 21 16:35:56 crc kubenswrapper[4760]: I0121 16:35:56.477591 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbbd5b3a8edf04b2ad933f1b37d2fdf5a430babf1fa4c10e888dee4c7d167b57"} err="failed to get container status \"fbbd5b3a8edf04b2ad933f1b37d2fdf5a430babf1fa4c10e888dee4c7d167b57\": rpc error: code = NotFound desc = could not find container \"fbbd5b3a8edf04b2ad933f1b37d2fdf5a430babf1fa4c10e888dee4c7d167b57\": container with ID starting with fbbd5b3a8edf04b2ad933f1b37d2fdf5a430babf1fa4c10e888dee4c7d167b57 not found: ID does not exist" Jan 21 16:35:57 crc kubenswrapper[4760]: I0121 16:35:57.633476 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d823ca0-c452-4095-a5b0-910667cc2673" path="/var/lib/kubelet/pods/6d823ca0-c452-4095-a5b0-910667cc2673/volumes" Jan 21 16:38:20 crc kubenswrapper[4760]: I0121 16:38:20.945786 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:38:20 crc kubenswrapper[4760]: I0121 
16:38:20.946346 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:38:50 crc kubenswrapper[4760]: I0121 16:38:50.946635 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:38:50 crc kubenswrapper[4760]: I0121 16:38:50.947298 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:39:20 crc kubenswrapper[4760]: I0121 16:39:20.946240 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:39:20 crc kubenswrapper[4760]: I0121 16:39:20.946814 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:39:20 crc kubenswrapper[4760]: I0121 16:39:20.946865 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-5lp9r"
Jan 21 16:39:20 crc kubenswrapper[4760]: I0121 16:39:20.947697 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"55dee4eb9868f3c9bf1409cba7a3551fe0e1a068bf7fcdeaf70de656dea0180f"} pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 16:39:20 crc kubenswrapper[4760]: I0121 16:39:20.947754 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" containerID="cri-o://55dee4eb9868f3c9bf1409cba7a3551fe0e1a068bf7fcdeaf70de656dea0180f" gracePeriod=600
Jan 21 16:39:21 crc kubenswrapper[4760]: E0121 16:39:21.067271 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:39:21 crc kubenswrapper[4760]: I0121 16:39:21.316419 4760 generic.go:334] "Generic (PLEG): container finished" podID="5dd365e7-570c-4130-a299-30e376624ce2" containerID="55dee4eb9868f3c9bf1409cba7a3551fe0e1a068bf7fcdeaf70de656dea0180f" exitCode=0
Jan 21 16:39:21 crc kubenswrapper[4760]: I0121 16:39:21.316446 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerDied","Data":"55dee4eb9868f3c9bf1409cba7a3551fe0e1a068bf7fcdeaf70de656dea0180f"}
Jan 21 16:39:21 crc kubenswrapper[4760]: I0121 16:39:21.316489 4760 scope.go:117] "RemoveContainer" containerID="eed7297f5e40d58e742e5926895158e315f6a80627f5a0d2303804b9428244b0"
Jan 21 16:39:21 crc kubenswrapper[4760]: I0121 16:39:21.316820 4760 scope.go:117] "RemoveContainer" containerID="55dee4eb9868f3c9bf1409cba7a3551fe0e1a068bf7fcdeaf70de656dea0180f"
Jan 21 16:39:21 crc kubenswrapper[4760]: E0121 16:39:21.317053 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:39:34 crc kubenswrapper[4760]: I0121 16:39:34.623352 4760 scope.go:117] "RemoveContainer" containerID="55dee4eb9868f3c9bf1409cba7a3551fe0e1a068bf7fcdeaf70de656dea0180f"
Jan 21 16:39:34 crc kubenswrapper[4760]: E0121 16:39:34.624087 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:39:48 crc kubenswrapper[4760]: I0121 16:39:48.623531 4760 scope.go:117] "RemoveContainer" containerID="55dee4eb9868f3c9bf1409cba7a3551fe0e1a068bf7fcdeaf70de656dea0180f"
Jan 21 16:39:48 crc kubenswrapper[4760]: E0121 16:39:48.625730 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:40:00 crc kubenswrapper[4760]: I0121 16:40:00.623378 4760 scope.go:117] "RemoveContainer" containerID="55dee4eb9868f3c9bf1409cba7a3551fe0e1a068bf7fcdeaf70de656dea0180f"
Jan 21 16:40:00 crc kubenswrapper[4760]: E0121 16:40:00.624307 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:40:11 crc kubenswrapper[4760]: I0121 16:40:11.622254 4760 scope.go:117] "RemoveContainer" containerID="55dee4eb9868f3c9bf1409cba7a3551fe0e1a068bf7fcdeaf70de656dea0180f"
Jan 21 16:40:11 crc kubenswrapper[4760]: E0121 16:40:11.623000 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:40:26 crc kubenswrapper[4760]: I0121 16:40:26.622462 4760 scope.go:117] "RemoveContainer" containerID="55dee4eb9868f3c9bf1409cba7a3551fe0e1a068bf7fcdeaf70de656dea0180f"
Jan 21 16:40:26 crc kubenswrapper[4760]: E0121 16:40:26.623428 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:40:38 crc kubenswrapper[4760]: I0121 16:40:38.622421 4760 scope.go:117] "RemoveContainer" containerID="55dee4eb9868f3c9bf1409cba7a3551fe0e1a068bf7fcdeaf70de656dea0180f"
Jan 21 16:40:38 crc kubenswrapper[4760]: E0121 16:40:38.623017 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:40:52 crc kubenswrapper[4760]: I0121 16:40:52.623695 4760 scope.go:117] "RemoveContainer" containerID="55dee4eb9868f3c9bf1409cba7a3551fe0e1a068bf7fcdeaf70de656dea0180f"
Jan 21 16:40:52 crc kubenswrapper[4760]: E0121 16:40:52.625045 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:41:03 crc kubenswrapper[4760]: I0121 16:41:03.623043 4760 scope.go:117] "RemoveContainer" containerID="55dee4eb9868f3c9bf1409cba7a3551fe0e1a068bf7fcdeaf70de656dea0180f"
Jan 21 16:41:03 crc kubenswrapper[4760]: E0121 16:41:03.623846 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:41:16 crc kubenswrapper[4760]: I0121 16:41:16.622857 4760 scope.go:117] "RemoveContainer" containerID="55dee4eb9868f3c9bf1409cba7a3551fe0e1a068bf7fcdeaf70de656dea0180f"
Jan 21 16:41:16 crc kubenswrapper[4760]: E0121 16:41:16.625385 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:41:30 crc kubenswrapper[4760]: I0121 16:41:30.622896 4760 scope.go:117] "RemoveContainer" containerID="55dee4eb9868f3c9bf1409cba7a3551fe0e1a068bf7fcdeaf70de656dea0180f"
Jan 21 16:41:30 crc kubenswrapper[4760]: E0121 16:41:30.623574 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:41:41 crc kubenswrapper[4760]: I0121 16:41:41.622904 4760 scope.go:117] "RemoveContainer" containerID="55dee4eb9868f3c9bf1409cba7a3551fe0e1a068bf7fcdeaf70de656dea0180f"
Jan 21 16:41:41 crc kubenswrapper[4760]: E0121 16:41:41.623777 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:41:52 crc kubenswrapper[4760]: I0121 16:41:52.622571 4760 scope.go:117] "RemoveContainer" containerID="55dee4eb9868f3c9bf1409cba7a3551fe0e1a068bf7fcdeaf70de656dea0180f"
Jan 21 16:41:52 crc kubenswrapper[4760]: E0121 16:41:52.624557 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:42:03 crc kubenswrapper[4760]: I0121 16:42:03.622612 4760 scope.go:117] "RemoveContainer" containerID="55dee4eb9868f3c9bf1409cba7a3551fe0e1a068bf7fcdeaf70de656dea0180f"
Jan 21 16:42:03 crc kubenswrapper[4760]: E0121 16:42:03.623468 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:42:17 crc kubenswrapper[4760]: I0121 16:42:17.623180 4760 scope.go:117] "RemoveContainer" containerID="55dee4eb9868f3c9bf1409cba7a3551fe0e1a068bf7fcdeaf70de656dea0180f"
Jan 21 16:42:17 crc kubenswrapper[4760]: E0121 16:42:17.623903 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:42:32 crc kubenswrapper[4760]: I0121 16:42:32.622822 4760 scope.go:117] "RemoveContainer" containerID="55dee4eb9868f3c9bf1409cba7a3551fe0e1a068bf7fcdeaf70de656dea0180f"
Jan 21 16:42:32 crc kubenswrapper[4760]: E0121 16:42:32.624617 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:42:43 crc kubenswrapper[4760]: I0121 16:42:43.622858 4760 scope.go:117] "RemoveContainer" containerID="55dee4eb9868f3c9bf1409cba7a3551fe0e1a068bf7fcdeaf70de656dea0180f"
Jan 21 16:42:43 crc kubenswrapper[4760]: E0121 16:42:43.623847 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:42:55 crc kubenswrapper[4760]: I0121 16:42:55.623007 4760 scope.go:117] "RemoveContainer" containerID="55dee4eb9868f3c9bf1409cba7a3551fe0e1a068bf7fcdeaf70de656dea0180f"
Jan 21 16:42:55 crc kubenswrapper[4760]: E0121 16:42:55.623766 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:43:09 crc kubenswrapper[4760]: I0121 16:43:09.634444 4760 scope.go:117] "RemoveContainer" containerID="55dee4eb9868f3c9bf1409cba7a3551fe0e1a068bf7fcdeaf70de656dea0180f"
Jan 21 16:43:09 crc kubenswrapper[4760]: E0121 16:43:09.635205 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:43:23 crc kubenswrapper[4760]: I0121 16:43:23.622170 4760 scope.go:117] "RemoveContainer" containerID="55dee4eb9868f3c9bf1409cba7a3551fe0e1a068bf7fcdeaf70de656dea0180f"
Jan 21 16:43:23 crc kubenswrapper[4760]: E0121 16:43:23.622880 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:43:37 crc kubenswrapper[4760]: I0121 16:43:37.623285 4760 scope.go:117] "RemoveContainer" containerID="55dee4eb9868f3c9bf1409cba7a3551fe0e1a068bf7fcdeaf70de656dea0180f"
Jan 21 16:43:37 crc kubenswrapper[4760]: E0121 16:43:37.624442 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:43:49 crc kubenswrapper[4760]: I0121 16:43:49.628761 4760 scope.go:117] "RemoveContainer" containerID="55dee4eb9868f3c9bf1409cba7a3551fe0e1a068bf7fcdeaf70de656dea0180f"
Jan 21 16:43:49 crc kubenswrapper[4760]: E0121 16:43:49.629724 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:44:03 crc kubenswrapper[4760]: I0121 16:44:03.622574 4760 scope.go:117] "RemoveContainer" containerID="55dee4eb9868f3c9bf1409cba7a3551fe0e1a068bf7fcdeaf70de656dea0180f"
Jan 21 16:44:03 crc kubenswrapper[4760]: E0121 16:44:03.623383 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:44:16 crc kubenswrapper[4760]: I0121 16:44:16.623011 4760 scope.go:117] "RemoveContainer" containerID="55dee4eb9868f3c9bf1409cba7a3551fe0e1a068bf7fcdeaf70de656dea0180f"
Jan 21 16:44:16 crc kubenswrapper[4760]: E0121 16:44:16.623738 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:44:28 crc kubenswrapper[4760]: I0121 16:44:28.623524 4760 scope.go:117] "RemoveContainer" containerID="55dee4eb9868f3c9bf1409cba7a3551fe0e1a068bf7fcdeaf70de656dea0180f"
Jan 21 16:44:28 crc kubenswrapper[4760]: I0121 16:44:28.849828 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerStarted","Data":"696842cd662b21910865ccb4754bbe4d3b79853866b2d5f9cf45005d461a53ce"}
Jan 21 16:45:00 crc kubenswrapper[4760]: I0121 16:45:00.152671 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483565-8mgq6"]
Jan 21 16:45:00 crc kubenswrapper[4760]: E0121 16:45:00.153492 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d823ca0-c452-4095-a5b0-910667cc2673" containerName="registry-server"
Jan 21 16:45:00 crc kubenswrapper[4760]: I0121 16:45:00.153505 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d823ca0-c452-4095-a5b0-910667cc2673" containerName="registry-server"
Jan 21 16:45:00 crc kubenswrapper[4760]: E0121 16:45:00.153524 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d823ca0-c452-4095-a5b0-910667cc2673" containerName="extract-content"
Jan 21 16:45:00 crc kubenswrapper[4760]: I0121 16:45:00.153531 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d823ca0-c452-4095-a5b0-910667cc2673" containerName="extract-content"
Jan 21 16:45:00 crc kubenswrapper[4760]: E0121 16:45:00.153552 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dea1f41-c019-4d7c-bef3-8b85c7ffabd5" containerName="extract-content"
Jan 21 16:45:00 crc kubenswrapper[4760]: I0121 16:45:00.153559 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dea1f41-c019-4d7c-bef3-8b85c7ffabd5" containerName="extract-content"
Jan 21 16:45:00 crc kubenswrapper[4760]: E0121 16:45:00.153567 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dea1f41-c019-4d7c-bef3-8b85c7ffabd5" containerName="registry-server"
Jan 21 16:45:00 crc kubenswrapper[4760]: I0121 16:45:00.153572 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dea1f41-c019-4d7c-bef3-8b85c7ffabd5" containerName="registry-server"
Jan 21 16:45:00 crc kubenswrapper[4760]: E0121 16:45:00.153585 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dea1f41-c019-4d7c-bef3-8b85c7ffabd5" containerName="extract-utilities"
Jan 21 16:45:00 crc kubenswrapper[4760]: I0121 16:45:00.153590 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dea1f41-c019-4d7c-bef3-8b85c7ffabd5" containerName="extract-utilities"
Jan 21 16:45:00 crc kubenswrapper[4760]: E0121 16:45:00.153604 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d823ca0-c452-4095-a5b0-910667cc2673" containerName="extract-utilities"
Jan 21 16:45:00 crc kubenswrapper[4760]: I0121 16:45:00.153610 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d823ca0-c452-4095-a5b0-910667cc2673" containerName="extract-utilities"
Jan 21 16:45:00 crc kubenswrapper[4760]: I0121 16:45:00.153772 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d823ca0-c452-4095-a5b0-910667cc2673" containerName="registry-server"
Jan 21 16:45:00 crc kubenswrapper[4760]: I0121 16:45:00.153782 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="0dea1f41-c019-4d7c-bef3-8b85c7ffabd5" containerName="registry-server"
Jan 21 16:45:00 crc kubenswrapper[4760]: I0121 16:45:00.154613 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-8mgq6"
Jan 21 16:45:00 crc kubenswrapper[4760]: I0121 16:45:00.167569 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483565-8mgq6"]
Jan 21 16:45:00 crc kubenswrapper[4760]: I0121 16:45:00.171103 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 21 16:45:00 crc kubenswrapper[4760]: I0121 16:45:00.173367 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 21 16:45:00 crc kubenswrapper[4760]: I0121 16:45:00.295801 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a5498160-60f5-4e39-8a43-fc4443b4c033-config-volume\") pod \"collect-profiles-29483565-8mgq6\" (UID: \"a5498160-60f5-4e39-8a43-fc4443b4c033\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-8mgq6"
Jan 21 16:45:00 crc kubenswrapper[4760]: I0121 16:45:00.295853 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a5498160-60f5-4e39-8a43-fc4443b4c033-secret-volume\") pod \"collect-profiles-29483565-8mgq6\" (UID: \"a5498160-60f5-4e39-8a43-fc4443b4c033\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-8mgq6"
Jan 21 16:45:00 crc kubenswrapper[4760]: I0121 16:45:00.295896 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll7vp\" (UniqueName: \"kubernetes.io/projected/a5498160-60f5-4e39-8a43-fc4443b4c033-kube-api-access-ll7vp\") pod \"collect-profiles-29483565-8mgq6\" (UID: \"a5498160-60f5-4e39-8a43-fc4443b4c033\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-8mgq6"
Jan 21 16:45:00 crc kubenswrapper[4760]: I0121 16:45:00.398544 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a5498160-60f5-4e39-8a43-fc4443b4c033-config-volume\") pod \"collect-profiles-29483565-8mgq6\" (UID: \"a5498160-60f5-4e39-8a43-fc4443b4c033\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-8mgq6"
Jan 21 16:45:00 crc kubenswrapper[4760]: I0121 16:45:00.398602 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a5498160-60f5-4e39-8a43-fc4443b4c033-secret-volume\") pod \"collect-profiles-29483565-8mgq6\" (UID: \"a5498160-60f5-4e39-8a43-fc4443b4c033\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-8mgq6"
Jan 21 16:45:00 crc kubenswrapper[4760]: I0121 16:45:00.398660 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ll7vp\" (UniqueName: \"kubernetes.io/projected/a5498160-60f5-4e39-8a43-fc4443b4c033-kube-api-access-ll7vp\") pod \"collect-profiles-29483565-8mgq6\" (UID: \"a5498160-60f5-4e39-8a43-fc4443b4c033\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-8mgq6"
Jan 21 16:45:00 crc kubenswrapper[4760]: I0121 16:45:00.400032 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a5498160-60f5-4e39-8a43-fc4443b4c033-config-volume\") pod \"collect-profiles-29483565-8mgq6\" (UID: \"a5498160-60f5-4e39-8a43-fc4443b4c033\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-8mgq6"
Jan 21 16:45:00 crc kubenswrapper[4760]: I0121 16:45:00.407403 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a5498160-60f5-4e39-8a43-fc4443b4c033-secret-volume\") pod \"collect-profiles-29483565-8mgq6\" (UID: \"a5498160-60f5-4e39-8a43-fc4443b4c033\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-8mgq6"
Jan 21 16:45:00 crc kubenswrapper[4760]: I0121 16:45:00.414222 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ll7vp\" (UniqueName: \"kubernetes.io/projected/a5498160-60f5-4e39-8a43-fc4443b4c033-kube-api-access-ll7vp\") pod \"collect-profiles-29483565-8mgq6\" (UID: \"a5498160-60f5-4e39-8a43-fc4443b4c033\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-8mgq6"
Jan 21 16:45:00 crc kubenswrapper[4760]: I0121 16:45:00.482838 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-8mgq6"
Jan 21 16:45:00 crc kubenswrapper[4760]: I0121 16:45:00.976378 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483565-8mgq6"]
Jan 21 16:45:01 crc kubenswrapper[4760]: I0121 16:45:01.126558 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-8mgq6" event={"ID":"a5498160-60f5-4e39-8a43-fc4443b4c033","Type":"ContainerStarted","Data":"1a8ee9b61bf9b82b4f1499d8b2a2501a6d22757471c669116613ac00ba1a3618"}
Jan 21 16:45:02 crc kubenswrapper[4760]: I0121 16:45:02.139685 4760 generic.go:334] "Generic (PLEG): container finished" podID="a5498160-60f5-4e39-8a43-fc4443b4c033" containerID="a86df7b242ca5e087391b26a06e993d3c4d611772c32e17898217ceb4f290710" exitCode=0
Jan 21 16:45:02 crc kubenswrapper[4760]: I0121 16:45:02.139768 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-8mgq6" event={"ID":"a5498160-60f5-4e39-8a43-fc4443b4c033","Type":"ContainerDied","Data":"a86df7b242ca5e087391b26a06e993d3c4d611772c32e17898217ceb4f290710"}
Jan 21 16:45:03 crc kubenswrapper[4760]: I0121 16:45:03.510717 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-8mgq6"
Jan 21 16:45:03 crc kubenswrapper[4760]: I0121 16:45:03.667224 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ll7vp\" (UniqueName: \"kubernetes.io/projected/a5498160-60f5-4e39-8a43-fc4443b4c033-kube-api-access-ll7vp\") pod \"a5498160-60f5-4e39-8a43-fc4443b4c033\" (UID: \"a5498160-60f5-4e39-8a43-fc4443b4c033\") "
Jan 21 16:45:03 crc kubenswrapper[4760]: I0121 16:45:03.667988 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5498160-60f5-4e39-8a43-fc4443b4c033-config-volume" (OuterVolumeSpecName: "config-volume") pod "a5498160-60f5-4e39-8a43-fc4443b4c033" (UID: "a5498160-60f5-4e39-8a43-fc4443b4c033"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 16:45:03 crc kubenswrapper[4760]: I0121 16:45:03.668038 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a5498160-60f5-4e39-8a43-fc4443b4c033-config-volume\") pod \"a5498160-60f5-4e39-8a43-fc4443b4c033\" (UID: \"a5498160-60f5-4e39-8a43-fc4443b4c033\") "
Jan 21 16:45:03 crc kubenswrapper[4760]: I0121 16:45:03.668098 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a5498160-60f5-4e39-8a43-fc4443b4c033-secret-volume\") pod \"a5498160-60f5-4e39-8a43-fc4443b4c033\" (UID: \"a5498160-60f5-4e39-8a43-fc4443b4c033\") "
Jan 21 16:45:03 crc kubenswrapper[4760]: I0121 16:45:03.670729 4760 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a5498160-60f5-4e39-8a43-fc4443b4c033-config-volume\") on node \"crc\" DevicePath \"\""
Jan 21 16:45:03 crc kubenswrapper[4760]: I0121 16:45:03.673032 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5498160-60f5-4e39-8a43-fc4443b4c033-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a5498160-60f5-4e39-8a43-fc4443b4c033" (UID: "a5498160-60f5-4e39-8a43-fc4443b4c033"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 16:45:03 crc kubenswrapper[4760]: I0121 16:45:03.675456 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5498160-60f5-4e39-8a43-fc4443b4c033-kube-api-access-ll7vp" (OuterVolumeSpecName: "kube-api-access-ll7vp") pod "a5498160-60f5-4e39-8a43-fc4443b4c033" (UID: "a5498160-60f5-4e39-8a43-fc4443b4c033"). InnerVolumeSpecName "kube-api-access-ll7vp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:45:03 crc kubenswrapper[4760]: I0121 16:45:03.771387 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ll7vp\" (UniqueName: \"kubernetes.io/projected/a5498160-60f5-4e39-8a43-fc4443b4c033-kube-api-access-ll7vp\") on node \"crc\" DevicePath \"\""
Jan 21 16:45:03 crc kubenswrapper[4760]: I0121 16:45:03.771417 4760 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a5498160-60f5-4e39-8a43-fc4443b4c033-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 21 16:45:04 crc kubenswrapper[4760]: I0121 16:45:04.156267 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-8mgq6" event={"ID":"a5498160-60f5-4e39-8a43-fc4443b4c033","Type":"ContainerDied","Data":"1a8ee9b61bf9b82b4f1499d8b2a2501a6d22757471c669116613ac00ba1a3618"}
Jan 21 16:45:04 crc kubenswrapper[4760]: I0121 16:45:04.156679 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a8ee9b61bf9b82b4f1499d8b2a2501a6d22757471c669116613ac00ba1a3618"
Jan 21 16:45:04 crc kubenswrapper[4760]: I0121 16:45:04.156753 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483565-8mgq6"
Jan 21 16:45:04 crc kubenswrapper[4760]: I0121 16:45:04.584049 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483520-59n5f"]
Jan 21 16:45:04 crc kubenswrapper[4760]: I0121 16:45:04.592746 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483520-59n5f"]
Jan 21 16:45:05 crc kubenswrapper[4760]: I0121 16:45:05.635546 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b71e327-2590-4a0d-8f08-44d58d095169" path="/var/lib/kubelet/pods/2b71e327-2590-4a0d-8f08-44d58d095169/volumes"
Jan 21 16:45:06 crc kubenswrapper[4760]: I0121 16:45:06.869839 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-56ms9"]
Jan 21 16:45:06 crc kubenswrapper[4760]: E0121 16:45:06.870584 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5498160-60f5-4e39-8a43-fc4443b4c033" containerName="collect-profiles"
Jan 21 16:45:06 crc kubenswrapper[4760]: I0121 16:45:06.870602 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5498160-60f5-4e39-8a43-fc4443b4c033" containerName="collect-profiles"
Jan 21 16:45:06 crc kubenswrapper[4760]: I0121 16:45:06.870851 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5498160-60f5-4e39-8a43-fc4443b4c033" containerName="collect-profiles"
Jan 21 16:45:06 crc kubenswrapper[4760]: I0121 16:45:06.873183 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-56ms9"
Jan 21 16:45:06 crc kubenswrapper[4760]: I0121 16:45:06.880155 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-56ms9"]
Jan 21 16:45:06 crc kubenswrapper[4760]: I0121 16:45:06.928588 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n47rd\" (UniqueName: \"kubernetes.io/projected/71cb570f-5116-460d-925d-f17db909d248-kube-api-access-n47rd\") pod \"redhat-operators-56ms9\" (UID: \"71cb570f-5116-460d-925d-f17db909d248\") " pod="openshift-marketplace/redhat-operators-56ms9"
Jan 21 16:45:06 crc kubenswrapper[4760]: I0121 16:45:06.928668 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71cb570f-5116-460d-925d-f17db909d248-catalog-content\") pod \"redhat-operators-56ms9\" (UID: \"71cb570f-5116-460d-925d-f17db909d248\") " pod="openshift-marketplace/redhat-operators-56ms9"
Jan 21 16:45:06 crc kubenswrapper[4760]: I0121 16:45:06.928716 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71cb570f-5116-460d-925d-f17db909d248-utilities\") pod \"redhat-operators-56ms9\" (UID: \"71cb570f-5116-460d-925d-f17db909d248\") " pod="openshift-marketplace/redhat-operators-56ms9"
Jan 21 16:45:07 crc kubenswrapper[4760]: I0121 16:45:07.030477 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71cb570f-5116-460d-925d-f17db909d248-catalog-content\") pod \"redhat-operators-56ms9\" (UID: \"71cb570f-5116-460d-925d-f17db909d248\") " pod="openshift-marketplace/redhat-operators-56ms9"
Jan 21 16:45:07 crc kubenswrapper[4760]: I0121 16:45:07.030581 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71cb570f-5116-460d-925d-f17db909d248-utilities\") pod \"redhat-operators-56ms9\" (UID: \"71cb570f-5116-460d-925d-f17db909d248\") " pod="openshift-marketplace/redhat-operators-56ms9"
Jan 21 16:45:07 crc kubenswrapper[4760]: I0121 16:45:07.030695 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n47rd\" (UniqueName: \"kubernetes.io/projected/71cb570f-5116-460d-925d-f17db909d248-kube-api-access-n47rd\") pod \"redhat-operators-56ms9\" (UID: \"71cb570f-5116-460d-925d-f17db909d248\") " pod="openshift-marketplace/redhat-operators-56ms9"
Jan 21 16:45:07 crc kubenswrapper[4760]: I0121 16:45:07.031119 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71cb570f-5116-460d-925d-f17db909d248-catalog-content\") pod \"redhat-operators-56ms9\" (UID: \"71cb570f-5116-460d-925d-f17db909d248\") " pod="openshift-marketplace/redhat-operators-56ms9"
Jan 21 16:45:07 crc kubenswrapper[4760]: I0121 16:45:07.031132 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71cb570f-5116-460d-925d-f17db909d248-utilities\") pod \"redhat-operators-56ms9\" (UID: \"71cb570f-5116-460d-925d-f17db909d248\") " pod="openshift-marketplace/redhat-operators-56ms9"
Jan 21 16:45:07 crc kubenswrapper[4760]: I0121 16:45:07.050586 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n47rd\" (UniqueName: \"kubernetes.io/projected/71cb570f-5116-460d-925d-f17db909d248-kube-api-access-n47rd\") pod \"redhat-operators-56ms9\" (UID: \"71cb570f-5116-460d-925d-f17db909d248\") " pod="openshift-marketplace/redhat-operators-56ms9"
Jan 21 16:45:07 crc kubenswrapper[4760]: I0121 16:45:07.196130 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-56ms9"
Jan 21 16:45:07 crc kubenswrapper[4760]: I0121 16:45:07.449438 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-56ms9"]
Jan 21 16:45:08 crc kubenswrapper[4760]: I0121 16:45:08.189033 4760 generic.go:334] "Generic (PLEG): container finished" podID="71cb570f-5116-460d-925d-f17db909d248" containerID="6509189da3a42c468f19515b7b417a0d32d2845024eeba53e9e4b6b492d9025e" exitCode=0
Jan 21 16:45:08 crc kubenswrapper[4760]: I0121 16:45:08.189442 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56ms9" event={"ID":"71cb570f-5116-460d-925d-f17db909d248","Type":"ContainerDied","Data":"6509189da3a42c468f19515b7b417a0d32d2845024eeba53e9e4b6b492d9025e"}
Jan 21 16:45:08 crc kubenswrapper[4760]: I0121 16:45:08.190717 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56ms9" event={"ID":"71cb570f-5116-460d-925d-f17db909d248","Type":"ContainerStarted","Data":"4b0a5128f06bf586631fbf12d4cd8a0b90f5641b8cc45d39ececc5ad842f4fe4"}
Jan 21 16:45:08 crc kubenswrapper[4760]: I0121 16:45:08.191510 4760 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 21 16:45:10 crc kubenswrapper[4760]: I0121 16:45:10.207908 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56ms9" event={"ID":"71cb570f-5116-460d-925d-f17db909d248","Type":"ContainerStarted","Data":"22342e7cb5aaec3f226bf050f89c782c33e63db0c4643d5b822f3c9cf59e149b"}
Jan 21 16:45:12 crc kubenswrapper[4760]: I0121 16:45:12.226355 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56ms9" event={"ID":"71cb570f-5116-460d-925d-f17db909d248","Type":"ContainerDied","Data":"22342e7cb5aaec3f226bf050f89c782c33e63db0c4643d5b822f3c9cf59e149b"}
Jan 21 16:45:12 crc kubenswrapper[4760]: I0121 16:45:12.226359 4760 generic.go:334] "Generic (PLEG): container finished" podID="71cb570f-5116-460d-925d-f17db909d248" containerID="22342e7cb5aaec3f226bf050f89c782c33e63db0c4643d5b822f3c9cf59e149b" exitCode=0
Jan 21 16:45:14 crc kubenswrapper[4760]: I0121 16:45:14.243335 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56ms9" event={"ID":"71cb570f-5116-460d-925d-f17db909d248","Type":"ContainerStarted","Data":"5a9207db1e465146196a2b09bb0dd7bb92eb514da094d2b50c74c26f99e2511d"}
Jan 21 16:45:14 crc kubenswrapper[4760]: I0121 16:45:14.264089 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-56ms9" podStartSLOduration=3.364873643 podStartE2EDuration="8.264067551s" podCreationTimestamp="2026-01-21 16:45:06 +0000 UTC" firstStartedPulling="2026-01-21 16:45:08.191294097 +0000 UTC m=+3478.859063675" lastFinishedPulling="2026-01-21 16:45:13.090488005 +0000 UTC m=+3483.758257583" observedRunningTime="2026-01-21 16:45:14.261893967 +0000 UTC m=+3484.929663555" watchObservedRunningTime="2026-01-21 16:45:14.264067551 +0000 UTC m=+3484.931837129"
Jan 21 16:45:17 crc kubenswrapper[4760]: I0121 16:45:17.196714 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-56ms9"
Jan 21 16:45:17 crc kubenswrapper[4760]: I0121 16:45:17.197234 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-56ms9"
Jan 21 16:45:18 crc kubenswrapper[4760]: I0121 16:45:18.243722 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-56ms9" podUID="71cb570f-5116-460d-925d-f17db909d248" containerName="registry-server" probeResult="failure" output=<
Jan 21 16:45:18 crc kubenswrapper[4760]: timeout: failed to connect service ":50051" within 1s
Jan 21 16:45:18 crc kubenswrapper[4760]: >
Jan 21 16:45:27 crc kubenswrapper[4760]: I0121 
16:45:27.241489 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-56ms9" Jan 21 16:45:27 crc kubenswrapper[4760]: I0121 16:45:27.295568 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-56ms9" Jan 21 16:45:27 crc kubenswrapper[4760]: I0121 16:45:27.476579 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-56ms9"] Jan 21 16:45:28 crc kubenswrapper[4760]: I0121 16:45:28.350607 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-56ms9" podUID="71cb570f-5116-460d-925d-f17db909d248" containerName="registry-server" containerID="cri-o://5a9207db1e465146196a2b09bb0dd7bb92eb514da094d2b50c74c26f99e2511d" gracePeriod=2 Jan 21 16:45:28 crc kubenswrapper[4760]: I0121 16:45:28.917789 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-56ms9" Jan 21 16:45:29 crc kubenswrapper[4760]: I0121 16:45:29.077391 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n47rd\" (UniqueName: \"kubernetes.io/projected/71cb570f-5116-460d-925d-f17db909d248-kube-api-access-n47rd\") pod \"71cb570f-5116-460d-925d-f17db909d248\" (UID: \"71cb570f-5116-460d-925d-f17db909d248\") " Jan 21 16:45:29 crc kubenswrapper[4760]: I0121 16:45:29.077530 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71cb570f-5116-460d-925d-f17db909d248-utilities\") pod \"71cb570f-5116-460d-925d-f17db909d248\" (UID: \"71cb570f-5116-460d-925d-f17db909d248\") " Jan 21 16:45:29 crc kubenswrapper[4760]: I0121 16:45:29.077687 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/71cb570f-5116-460d-925d-f17db909d248-catalog-content\") pod \"71cb570f-5116-460d-925d-f17db909d248\" (UID: \"71cb570f-5116-460d-925d-f17db909d248\") " Jan 21 16:45:29 crc kubenswrapper[4760]: I0121 16:45:29.078546 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71cb570f-5116-460d-925d-f17db909d248-utilities" (OuterVolumeSpecName: "utilities") pod "71cb570f-5116-460d-925d-f17db909d248" (UID: "71cb570f-5116-460d-925d-f17db909d248"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:45:29 crc kubenswrapper[4760]: I0121 16:45:29.088622 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71cb570f-5116-460d-925d-f17db909d248-kube-api-access-n47rd" (OuterVolumeSpecName: "kube-api-access-n47rd") pod "71cb570f-5116-460d-925d-f17db909d248" (UID: "71cb570f-5116-460d-925d-f17db909d248"). InnerVolumeSpecName "kube-api-access-n47rd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:45:29 crc kubenswrapper[4760]: I0121 16:45:29.179491 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n47rd\" (UniqueName: \"kubernetes.io/projected/71cb570f-5116-460d-925d-f17db909d248-kube-api-access-n47rd\") on node \"crc\" DevicePath \"\"" Jan 21 16:45:29 crc kubenswrapper[4760]: I0121 16:45:29.179525 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71cb570f-5116-460d-925d-f17db909d248-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:45:29 crc kubenswrapper[4760]: I0121 16:45:29.189005 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71cb570f-5116-460d-925d-f17db909d248-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71cb570f-5116-460d-925d-f17db909d248" (UID: "71cb570f-5116-460d-925d-f17db909d248"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:45:29 crc kubenswrapper[4760]: I0121 16:45:29.281102 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71cb570f-5116-460d-925d-f17db909d248-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:45:29 crc kubenswrapper[4760]: I0121 16:45:29.362615 4760 generic.go:334] "Generic (PLEG): container finished" podID="71cb570f-5116-460d-925d-f17db909d248" containerID="5a9207db1e465146196a2b09bb0dd7bb92eb514da094d2b50c74c26f99e2511d" exitCode=0 Jan 21 16:45:29 crc kubenswrapper[4760]: I0121 16:45:29.362663 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56ms9" event={"ID":"71cb570f-5116-460d-925d-f17db909d248","Type":"ContainerDied","Data":"5a9207db1e465146196a2b09bb0dd7bb92eb514da094d2b50c74c26f99e2511d"} Jan 21 16:45:29 crc kubenswrapper[4760]: I0121 16:45:29.362694 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56ms9" event={"ID":"71cb570f-5116-460d-925d-f17db909d248","Type":"ContainerDied","Data":"4b0a5128f06bf586631fbf12d4cd8a0b90f5641b8cc45d39ececc5ad842f4fe4"} Jan 21 16:45:29 crc kubenswrapper[4760]: I0121 16:45:29.362715 4760 scope.go:117] "RemoveContainer" containerID="5a9207db1e465146196a2b09bb0dd7bb92eb514da094d2b50c74c26f99e2511d" Jan 21 16:45:29 crc kubenswrapper[4760]: I0121 16:45:29.362883 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-56ms9" Jan 21 16:45:29 crc kubenswrapper[4760]: I0121 16:45:29.396465 4760 scope.go:117] "RemoveContainer" containerID="22342e7cb5aaec3f226bf050f89c782c33e63db0c4643d5b822f3c9cf59e149b" Jan 21 16:45:29 crc kubenswrapper[4760]: I0121 16:45:29.408192 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-56ms9"] Jan 21 16:45:29 crc kubenswrapper[4760]: I0121 16:45:29.417651 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-56ms9"] Jan 21 16:45:29 crc kubenswrapper[4760]: I0121 16:45:29.424652 4760 scope.go:117] "RemoveContainer" containerID="6509189da3a42c468f19515b7b417a0d32d2845024eeba53e9e4b6b492d9025e" Jan 21 16:45:29 crc kubenswrapper[4760]: I0121 16:45:29.460003 4760 scope.go:117] "RemoveContainer" containerID="5a9207db1e465146196a2b09bb0dd7bb92eb514da094d2b50c74c26f99e2511d" Jan 21 16:45:29 crc kubenswrapper[4760]: E0121 16:45:29.460774 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a9207db1e465146196a2b09bb0dd7bb92eb514da094d2b50c74c26f99e2511d\": container with ID starting with 5a9207db1e465146196a2b09bb0dd7bb92eb514da094d2b50c74c26f99e2511d not found: ID does not exist" containerID="5a9207db1e465146196a2b09bb0dd7bb92eb514da094d2b50c74c26f99e2511d" Jan 21 16:45:29 crc kubenswrapper[4760]: I0121 16:45:29.460823 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a9207db1e465146196a2b09bb0dd7bb92eb514da094d2b50c74c26f99e2511d"} err="failed to get container status \"5a9207db1e465146196a2b09bb0dd7bb92eb514da094d2b50c74c26f99e2511d\": rpc error: code = NotFound desc = could not find container \"5a9207db1e465146196a2b09bb0dd7bb92eb514da094d2b50c74c26f99e2511d\": container with ID starting with 5a9207db1e465146196a2b09bb0dd7bb92eb514da094d2b50c74c26f99e2511d not found: ID does 
not exist" Jan 21 16:45:29 crc kubenswrapper[4760]: I0121 16:45:29.460857 4760 scope.go:117] "RemoveContainer" containerID="22342e7cb5aaec3f226bf050f89c782c33e63db0c4643d5b822f3c9cf59e149b" Jan 21 16:45:29 crc kubenswrapper[4760]: E0121 16:45:29.461363 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22342e7cb5aaec3f226bf050f89c782c33e63db0c4643d5b822f3c9cf59e149b\": container with ID starting with 22342e7cb5aaec3f226bf050f89c782c33e63db0c4643d5b822f3c9cf59e149b not found: ID does not exist" containerID="22342e7cb5aaec3f226bf050f89c782c33e63db0c4643d5b822f3c9cf59e149b" Jan 21 16:45:29 crc kubenswrapper[4760]: I0121 16:45:29.461428 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22342e7cb5aaec3f226bf050f89c782c33e63db0c4643d5b822f3c9cf59e149b"} err="failed to get container status \"22342e7cb5aaec3f226bf050f89c782c33e63db0c4643d5b822f3c9cf59e149b\": rpc error: code = NotFound desc = could not find container \"22342e7cb5aaec3f226bf050f89c782c33e63db0c4643d5b822f3c9cf59e149b\": container with ID starting with 22342e7cb5aaec3f226bf050f89c782c33e63db0c4643d5b822f3c9cf59e149b not found: ID does not exist" Jan 21 16:45:29 crc kubenswrapper[4760]: I0121 16:45:29.461463 4760 scope.go:117] "RemoveContainer" containerID="6509189da3a42c468f19515b7b417a0d32d2845024eeba53e9e4b6b492d9025e" Jan 21 16:45:29 crc kubenswrapper[4760]: E0121 16:45:29.464736 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6509189da3a42c468f19515b7b417a0d32d2845024eeba53e9e4b6b492d9025e\": container with ID starting with 6509189da3a42c468f19515b7b417a0d32d2845024eeba53e9e4b6b492d9025e not found: ID does not exist" containerID="6509189da3a42c468f19515b7b417a0d32d2845024eeba53e9e4b6b492d9025e" Jan 21 16:45:29 crc kubenswrapper[4760]: I0121 16:45:29.464781 4760 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6509189da3a42c468f19515b7b417a0d32d2845024eeba53e9e4b6b492d9025e"} err="failed to get container status \"6509189da3a42c468f19515b7b417a0d32d2845024eeba53e9e4b6b492d9025e\": rpc error: code = NotFound desc = could not find container \"6509189da3a42c468f19515b7b417a0d32d2845024eeba53e9e4b6b492d9025e\": container with ID starting with 6509189da3a42c468f19515b7b417a0d32d2845024eeba53e9e4b6b492d9025e not found: ID does not exist" Jan 21 16:45:29 crc kubenswrapper[4760]: I0121 16:45:29.632915 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71cb570f-5116-460d-925d-f17db909d248" path="/var/lib/kubelet/pods/71cb570f-5116-460d-925d-f17db909d248/volumes" Jan 21 16:45:33 crc kubenswrapper[4760]: I0121 16:45:33.735467 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rnkrv"] Jan 21 16:45:33 crc kubenswrapper[4760]: E0121 16:45:33.737395 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71cb570f-5116-460d-925d-f17db909d248" containerName="extract-content" Jan 21 16:45:33 crc kubenswrapper[4760]: I0121 16:45:33.737492 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="71cb570f-5116-460d-925d-f17db909d248" containerName="extract-content" Jan 21 16:45:33 crc kubenswrapper[4760]: E0121 16:45:33.738126 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71cb570f-5116-460d-925d-f17db909d248" containerName="extract-utilities" Jan 21 16:45:33 crc kubenswrapper[4760]: I0121 16:45:33.738216 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="71cb570f-5116-460d-925d-f17db909d248" containerName="extract-utilities" Jan 21 16:45:33 crc kubenswrapper[4760]: E0121 16:45:33.738301 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71cb570f-5116-460d-925d-f17db909d248" containerName="registry-server" Jan 21 16:45:33 crc kubenswrapper[4760]: I0121 16:45:33.738395 4760 
state_mem.go:107] "Deleted CPUSet assignment" podUID="71cb570f-5116-460d-925d-f17db909d248" containerName="registry-server" Jan 21 16:45:33 crc kubenswrapper[4760]: I0121 16:45:33.738689 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="71cb570f-5116-460d-925d-f17db909d248" containerName="registry-server" Jan 21 16:45:33 crc kubenswrapper[4760]: I0121 16:45:33.740211 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rnkrv" Jan 21 16:45:33 crc kubenswrapper[4760]: I0121 16:45:33.750221 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rnkrv"] Jan 21 16:45:33 crc kubenswrapper[4760]: I0121 16:45:33.899240 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fddqr\" (UniqueName: \"kubernetes.io/projected/fe90f8c9-68c8-4473-9b92-6d0b820591db-kube-api-access-fddqr\") pod \"community-operators-rnkrv\" (UID: \"fe90f8c9-68c8-4473-9b92-6d0b820591db\") " pod="openshift-marketplace/community-operators-rnkrv" Jan 21 16:45:33 crc kubenswrapper[4760]: I0121 16:45:33.899393 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe90f8c9-68c8-4473-9b92-6d0b820591db-utilities\") pod \"community-operators-rnkrv\" (UID: \"fe90f8c9-68c8-4473-9b92-6d0b820591db\") " pod="openshift-marketplace/community-operators-rnkrv" Jan 21 16:45:33 crc kubenswrapper[4760]: I0121 16:45:33.899482 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe90f8c9-68c8-4473-9b92-6d0b820591db-catalog-content\") pod \"community-operators-rnkrv\" (UID: \"fe90f8c9-68c8-4473-9b92-6d0b820591db\") " pod="openshift-marketplace/community-operators-rnkrv" Jan 21 16:45:34 crc kubenswrapper[4760]: I0121 
16:45:34.001286 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe90f8c9-68c8-4473-9b92-6d0b820591db-utilities\") pod \"community-operators-rnkrv\" (UID: \"fe90f8c9-68c8-4473-9b92-6d0b820591db\") " pod="openshift-marketplace/community-operators-rnkrv" Jan 21 16:45:34 crc kubenswrapper[4760]: I0121 16:45:34.001387 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe90f8c9-68c8-4473-9b92-6d0b820591db-catalog-content\") pod \"community-operators-rnkrv\" (UID: \"fe90f8c9-68c8-4473-9b92-6d0b820591db\") " pod="openshift-marketplace/community-operators-rnkrv" Jan 21 16:45:34 crc kubenswrapper[4760]: I0121 16:45:34.001463 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fddqr\" (UniqueName: \"kubernetes.io/projected/fe90f8c9-68c8-4473-9b92-6d0b820591db-kube-api-access-fddqr\") pod \"community-operators-rnkrv\" (UID: \"fe90f8c9-68c8-4473-9b92-6d0b820591db\") " pod="openshift-marketplace/community-operators-rnkrv" Jan 21 16:45:34 crc kubenswrapper[4760]: I0121 16:45:34.001836 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe90f8c9-68c8-4473-9b92-6d0b820591db-utilities\") pod \"community-operators-rnkrv\" (UID: \"fe90f8c9-68c8-4473-9b92-6d0b820591db\") " pod="openshift-marketplace/community-operators-rnkrv" Jan 21 16:45:34 crc kubenswrapper[4760]: I0121 16:45:34.001874 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe90f8c9-68c8-4473-9b92-6d0b820591db-catalog-content\") pod \"community-operators-rnkrv\" (UID: \"fe90f8c9-68c8-4473-9b92-6d0b820591db\") " pod="openshift-marketplace/community-operators-rnkrv" Jan 21 16:45:34 crc kubenswrapper[4760]: I0121 16:45:34.025956 4760 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fddqr\" (UniqueName: \"kubernetes.io/projected/fe90f8c9-68c8-4473-9b92-6d0b820591db-kube-api-access-fddqr\") pod \"community-operators-rnkrv\" (UID: \"fe90f8c9-68c8-4473-9b92-6d0b820591db\") " pod="openshift-marketplace/community-operators-rnkrv" Jan 21 16:45:34 crc kubenswrapper[4760]: I0121 16:45:34.060630 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rnkrv" Jan 21 16:45:34 crc kubenswrapper[4760]: I0121 16:45:34.747611 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rnkrv"] Jan 21 16:45:35 crc kubenswrapper[4760]: I0121 16:45:35.415388 4760 generic.go:334] "Generic (PLEG): container finished" podID="fe90f8c9-68c8-4473-9b92-6d0b820591db" containerID="d94f9c09936f6d1074bc8f57d3a51503922c1ba120aaccb20bb9423b04981136" exitCode=0 Jan 21 16:45:35 crc kubenswrapper[4760]: I0121 16:45:35.415445 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rnkrv" event={"ID":"fe90f8c9-68c8-4473-9b92-6d0b820591db","Type":"ContainerDied","Data":"d94f9c09936f6d1074bc8f57d3a51503922c1ba120aaccb20bb9423b04981136"} Jan 21 16:45:35 crc kubenswrapper[4760]: I0121 16:45:35.415991 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rnkrv" event={"ID":"fe90f8c9-68c8-4473-9b92-6d0b820591db","Type":"ContainerStarted","Data":"1cf9ef1bd94c82a350d0c5a9da6a21dd14c89190c5837cc4b4df7a72171f7555"} Jan 21 16:45:36 crc kubenswrapper[4760]: I0121 16:45:36.425739 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rnkrv" event={"ID":"fe90f8c9-68c8-4473-9b92-6d0b820591db","Type":"ContainerStarted","Data":"63905ea55a3ef196fc1b00c6b40a3553b40d138e05c2e7495c785a6c0c53e6d1"} Jan 21 16:45:37 crc kubenswrapper[4760]: I0121 16:45:37.436840 4760 
generic.go:334] "Generic (PLEG): container finished" podID="fe90f8c9-68c8-4473-9b92-6d0b820591db" containerID="63905ea55a3ef196fc1b00c6b40a3553b40d138e05c2e7495c785a6c0c53e6d1" exitCode=0 Jan 21 16:45:37 crc kubenswrapper[4760]: I0121 16:45:37.436899 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rnkrv" event={"ID":"fe90f8c9-68c8-4473-9b92-6d0b820591db","Type":"ContainerDied","Data":"63905ea55a3ef196fc1b00c6b40a3553b40d138e05c2e7495c785a6c0c53e6d1"} Jan 21 16:45:38 crc kubenswrapper[4760]: I0121 16:45:38.448500 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rnkrv" event={"ID":"fe90f8c9-68c8-4473-9b92-6d0b820591db","Type":"ContainerStarted","Data":"9479596f2e3aa0b0b11b55841e46779c725cc8297bc5c1f35d938dabd45a4551"} Jan 21 16:45:38 crc kubenswrapper[4760]: I0121 16:45:38.473221 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rnkrv" podStartSLOduration=3.073411392 podStartE2EDuration="5.473202387s" podCreationTimestamp="2026-01-21 16:45:33 +0000 UTC" firstStartedPulling="2026-01-21 16:45:35.417510331 +0000 UTC m=+3506.085279909" lastFinishedPulling="2026-01-21 16:45:37.817301326 +0000 UTC m=+3508.485070904" observedRunningTime="2026-01-21 16:45:38.463925927 +0000 UTC m=+3509.131695505" watchObservedRunningTime="2026-01-21 16:45:38.473202387 +0000 UTC m=+3509.140971965" Jan 21 16:45:43 crc kubenswrapper[4760]: I0121 16:45:43.141185 4760 scope.go:117] "RemoveContainer" containerID="ef02e145078e842ec9d815a9c5581b8d539b4a39bb6283ec22a7de868f0aab8d" Jan 21 16:45:44 crc kubenswrapper[4760]: I0121 16:45:44.060877 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rnkrv" Jan 21 16:45:44 crc kubenswrapper[4760]: I0121 16:45:44.061224 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-rnkrv" Jan 21 16:45:44 crc kubenswrapper[4760]: I0121 16:45:44.104343 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rnkrv" Jan 21 16:45:44 crc kubenswrapper[4760]: I0121 16:45:44.548667 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rnkrv" Jan 21 16:45:44 crc kubenswrapper[4760]: I0121 16:45:44.605127 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rnkrv"] Jan 21 16:45:46 crc kubenswrapper[4760]: I0121 16:45:46.517646 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rnkrv" podUID="fe90f8c9-68c8-4473-9b92-6d0b820591db" containerName="registry-server" containerID="cri-o://9479596f2e3aa0b0b11b55841e46779c725cc8297bc5c1f35d938dabd45a4551" gracePeriod=2 Jan 21 16:45:47 crc kubenswrapper[4760]: I0121 16:45:47.136395 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rnkrv" Jan 21 16:45:47 crc kubenswrapper[4760]: I0121 16:45:47.152886 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fddqr\" (UniqueName: \"kubernetes.io/projected/fe90f8c9-68c8-4473-9b92-6d0b820591db-kube-api-access-fddqr\") pod \"fe90f8c9-68c8-4473-9b92-6d0b820591db\" (UID: \"fe90f8c9-68c8-4473-9b92-6d0b820591db\") " Jan 21 16:45:47 crc kubenswrapper[4760]: I0121 16:45:47.153050 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe90f8c9-68c8-4473-9b92-6d0b820591db-utilities\") pod \"fe90f8c9-68c8-4473-9b92-6d0b820591db\" (UID: \"fe90f8c9-68c8-4473-9b92-6d0b820591db\") " Jan 21 16:45:47 crc kubenswrapper[4760]: I0121 16:45:47.153205 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe90f8c9-68c8-4473-9b92-6d0b820591db-catalog-content\") pod \"fe90f8c9-68c8-4473-9b92-6d0b820591db\" (UID: \"fe90f8c9-68c8-4473-9b92-6d0b820591db\") " Jan 21 16:45:47 crc kubenswrapper[4760]: I0121 16:45:47.154080 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe90f8c9-68c8-4473-9b92-6d0b820591db-utilities" (OuterVolumeSpecName: "utilities") pod "fe90f8c9-68c8-4473-9b92-6d0b820591db" (UID: "fe90f8c9-68c8-4473-9b92-6d0b820591db"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:45:47 crc kubenswrapper[4760]: I0121 16:45:47.159713 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe90f8c9-68c8-4473-9b92-6d0b820591db-kube-api-access-fddqr" (OuterVolumeSpecName: "kube-api-access-fddqr") pod "fe90f8c9-68c8-4473-9b92-6d0b820591db" (UID: "fe90f8c9-68c8-4473-9b92-6d0b820591db"). InnerVolumeSpecName "kube-api-access-fddqr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:45:47 crc kubenswrapper[4760]: I0121 16:45:47.221628 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe90f8c9-68c8-4473-9b92-6d0b820591db-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fe90f8c9-68c8-4473-9b92-6d0b820591db" (UID: "fe90f8c9-68c8-4473-9b92-6d0b820591db"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:45:47 crc kubenswrapper[4760]: I0121 16:45:47.254388 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe90f8c9-68c8-4473-9b92-6d0b820591db-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:45:47 crc kubenswrapper[4760]: I0121 16:45:47.254418 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fddqr\" (UniqueName: \"kubernetes.io/projected/fe90f8c9-68c8-4473-9b92-6d0b820591db-kube-api-access-fddqr\") on node \"crc\" DevicePath \"\"" Jan 21 16:45:47 crc kubenswrapper[4760]: I0121 16:45:47.254432 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe90f8c9-68c8-4473-9b92-6d0b820591db-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:45:47 crc kubenswrapper[4760]: I0121 16:45:47.528999 4760 generic.go:334] "Generic (PLEG): container finished" podID="fe90f8c9-68c8-4473-9b92-6d0b820591db" containerID="9479596f2e3aa0b0b11b55841e46779c725cc8297bc5c1f35d938dabd45a4551" exitCode=0 Jan 21 16:45:47 crc kubenswrapper[4760]: I0121 16:45:47.529048 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rnkrv" event={"ID":"fe90f8c9-68c8-4473-9b92-6d0b820591db","Type":"ContainerDied","Data":"9479596f2e3aa0b0b11b55841e46779c725cc8297bc5c1f35d938dabd45a4551"} Jan 21 16:45:47 crc kubenswrapper[4760]: I0121 16:45:47.529087 4760 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-rnkrv" event={"ID":"fe90f8c9-68c8-4473-9b92-6d0b820591db","Type":"ContainerDied","Data":"1cf9ef1bd94c82a350d0c5a9da6a21dd14c89190c5837cc4b4df7a72171f7555"} Jan 21 16:45:47 crc kubenswrapper[4760]: I0121 16:45:47.529117 4760 scope.go:117] "RemoveContainer" containerID="9479596f2e3aa0b0b11b55841e46779c725cc8297bc5c1f35d938dabd45a4551" Jan 21 16:45:47 crc kubenswrapper[4760]: I0121 16:45:47.529162 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rnkrv" Jan 21 16:45:47 crc kubenswrapper[4760]: I0121 16:45:47.556309 4760 scope.go:117] "RemoveContainer" containerID="63905ea55a3ef196fc1b00c6b40a3553b40d138e05c2e7495c785a6c0c53e6d1" Jan 21 16:45:47 crc kubenswrapper[4760]: I0121 16:45:47.565077 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rnkrv"] Jan 21 16:45:47 crc kubenswrapper[4760]: I0121 16:45:47.577604 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rnkrv"] Jan 21 16:45:47 crc kubenswrapper[4760]: I0121 16:45:47.589133 4760 scope.go:117] "RemoveContainer" containerID="d94f9c09936f6d1074bc8f57d3a51503922c1ba120aaccb20bb9423b04981136" Jan 21 16:45:47 crc kubenswrapper[4760]: I0121 16:45:47.621410 4760 scope.go:117] "RemoveContainer" containerID="9479596f2e3aa0b0b11b55841e46779c725cc8297bc5c1f35d938dabd45a4551" Jan 21 16:45:47 crc kubenswrapper[4760]: E0121 16:45:47.623253 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9479596f2e3aa0b0b11b55841e46779c725cc8297bc5c1f35d938dabd45a4551\": container with ID starting with 9479596f2e3aa0b0b11b55841e46779c725cc8297bc5c1f35d938dabd45a4551 not found: ID does not exist" containerID="9479596f2e3aa0b0b11b55841e46779c725cc8297bc5c1f35d938dabd45a4551" Jan 21 16:45:47 crc kubenswrapper[4760]: I0121 
16:45:47.623299 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9479596f2e3aa0b0b11b55841e46779c725cc8297bc5c1f35d938dabd45a4551"} err="failed to get container status \"9479596f2e3aa0b0b11b55841e46779c725cc8297bc5c1f35d938dabd45a4551\": rpc error: code = NotFound desc = could not find container \"9479596f2e3aa0b0b11b55841e46779c725cc8297bc5c1f35d938dabd45a4551\": container with ID starting with 9479596f2e3aa0b0b11b55841e46779c725cc8297bc5c1f35d938dabd45a4551 not found: ID does not exist" Jan 21 16:45:47 crc kubenswrapper[4760]: I0121 16:45:47.623335 4760 scope.go:117] "RemoveContainer" containerID="63905ea55a3ef196fc1b00c6b40a3553b40d138e05c2e7495c785a6c0c53e6d1" Jan 21 16:45:47 crc kubenswrapper[4760]: E0121 16:45:47.624034 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63905ea55a3ef196fc1b00c6b40a3553b40d138e05c2e7495c785a6c0c53e6d1\": container with ID starting with 63905ea55a3ef196fc1b00c6b40a3553b40d138e05c2e7495c785a6c0c53e6d1 not found: ID does not exist" containerID="63905ea55a3ef196fc1b00c6b40a3553b40d138e05c2e7495c785a6c0c53e6d1" Jan 21 16:45:47 crc kubenswrapper[4760]: I0121 16:45:47.624078 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63905ea55a3ef196fc1b00c6b40a3553b40d138e05c2e7495c785a6c0c53e6d1"} err="failed to get container status \"63905ea55a3ef196fc1b00c6b40a3553b40d138e05c2e7495c785a6c0c53e6d1\": rpc error: code = NotFound desc = could not find container \"63905ea55a3ef196fc1b00c6b40a3553b40d138e05c2e7495c785a6c0c53e6d1\": container with ID starting with 63905ea55a3ef196fc1b00c6b40a3553b40d138e05c2e7495c785a6c0c53e6d1 not found: ID does not exist" Jan 21 16:45:47 crc kubenswrapper[4760]: I0121 16:45:47.624103 4760 scope.go:117] "RemoveContainer" containerID="d94f9c09936f6d1074bc8f57d3a51503922c1ba120aaccb20bb9423b04981136" Jan 21 16:45:47 crc 
kubenswrapper[4760]: E0121 16:45:47.624522 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d94f9c09936f6d1074bc8f57d3a51503922c1ba120aaccb20bb9423b04981136\": container with ID starting with d94f9c09936f6d1074bc8f57d3a51503922c1ba120aaccb20bb9423b04981136 not found: ID does not exist" containerID="d94f9c09936f6d1074bc8f57d3a51503922c1ba120aaccb20bb9423b04981136" Jan 21 16:45:47 crc kubenswrapper[4760]: I0121 16:45:47.624549 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d94f9c09936f6d1074bc8f57d3a51503922c1ba120aaccb20bb9423b04981136"} err="failed to get container status \"d94f9c09936f6d1074bc8f57d3a51503922c1ba120aaccb20bb9423b04981136\": rpc error: code = NotFound desc = could not find container \"d94f9c09936f6d1074bc8f57d3a51503922c1ba120aaccb20bb9423b04981136\": container with ID starting with d94f9c09936f6d1074bc8f57d3a51503922c1ba120aaccb20bb9423b04981136 not found: ID does not exist" Jan 21 16:45:47 crc kubenswrapper[4760]: I0121 16:45:47.640776 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe90f8c9-68c8-4473-9b92-6d0b820591db" path="/var/lib/kubelet/pods/fe90f8c9-68c8-4473-9b92-6d0b820591db/volumes" Jan 21 16:46:21 crc kubenswrapper[4760]: I0121 16:46:21.868194 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tmv22"] Jan 21 16:46:21 crc kubenswrapper[4760]: E0121 16:46:21.869226 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe90f8c9-68c8-4473-9b92-6d0b820591db" containerName="registry-server" Jan 21 16:46:21 crc kubenswrapper[4760]: I0121 16:46:21.869242 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe90f8c9-68c8-4473-9b92-6d0b820591db" containerName="registry-server" Jan 21 16:46:21 crc kubenswrapper[4760]: E0121 16:46:21.869280 4760 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="fe90f8c9-68c8-4473-9b92-6d0b820591db" containerName="extract-utilities" Jan 21 16:46:21 crc kubenswrapper[4760]: I0121 16:46:21.869290 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe90f8c9-68c8-4473-9b92-6d0b820591db" containerName="extract-utilities" Jan 21 16:46:21 crc kubenswrapper[4760]: E0121 16:46:21.869305 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe90f8c9-68c8-4473-9b92-6d0b820591db" containerName="extract-content" Jan 21 16:46:21 crc kubenswrapper[4760]: I0121 16:46:21.869312 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe90f8c9-68c8-4473-9b92-6d0b820591db" containerName="extract-content" Jan 21 16:46:21 crc kubenswrapper[4760]: I0121 16:46:21.869565 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe90f8c9-68c8-4473-9b92-6d0b820591db" containerName="registry-server" Jan 21 16:46:21 crc kubenswrapper[4760]: I0121 16:46:21.871200 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tmv22" Jan 21 16:46:21 crc kubenswrapper[4760]: I0121 16:46:21.878193 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tmv22"] Jan 21 16:46:21 crc kubenswrapper[4760]: I0121 16:46:21.912478 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cdd89f43-cfac-4ed0-9475-9d7821e1e2f0-utilities\") pod \"certified-operators-tmv22\" (UID: \"cdd89f43-cfac-4ed0-9475-9d7821e1e2f0\") " pod="openshift-marketplace/certified-operators-tmv22" Jan 21 16:46:21 crc kubenswrapper[4760]: I0121 16:46:21.912556 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr7gm\" (UniqueName: \"kubernetes.io/projected/cdd89f43-cfac-4ed0-9475-9d7821e1e2f0-kube-api-access-hr7gm\") pod \"certified-operators-tmv22\" (UID: 
\"cdd89f43-cfac-4ed0-9475-9d7821e1e2f0\") " pod="openshift-marketplace/certified-operators-tmv22" Jan 21 16:46:21 crc kubenswrapper[4760]: I0121 16:46:21.912595 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cdd89f43-cfac-4ed0-9475-9d7821e1e2f0-catalog-content\") pod \"certified-operators-tmv22\" (UID: \"cdd89f43-cfac-4ed0-9475-9d7821e1e2f0\") " pod="openshift-marketplace/certified-operators-tmv22" Jan 21 16:46:22 crc kubenswrapper[4760]: I0121 16:46:22.013580 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cdd89f43-cfac-4ed0-9475-9d7821e1e2f0-catalog-content\") pod \"certified-operators-tmv22\" (UID: \"cdd89f43-cfac-4ed0-9475-9d7821e1e2f0\") " pod="openshift-marketplace/certified-operators-tmv22" Jan 21 16:46:22 crc kubenswrapper[4760]: I0121 16:46:22.013753 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cdd89f43-cfac-4ed0-9475-9d7821e1e2f0-utilities\") pod \"certified-operators-tmv22\" (UID: \"cdd89f43-cfac-4ed0-9475-9d7821e1e2f0\") " pod="openshift-marketplace/certified-operators-tmv22" Jan 21 16:46:22 crc kubenswrapper[4760]: I0121 16:46:22.013813 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hr7gm\" (UniqueName: \"kubernetes.io/projected/cdd89f43-cfac-4ed0-9475-9d7821e1e2f0-kube-api-access-hr7gm\") pod \"certified-operators-tmv22\" (UID: \"cdd89f43-cfac-4ed0-9475-9d7821e1e2f0\") " pod="openshift-marketplace/certified-operators-tmv22" Jan 21 16:46:22 crc kubenswrapper[4760]: I0121 16:46:22.014225 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cdd89f43-cfac-4ed0-9475-9d7821e1e2f0-catalog-content\") pod \"certified-operators-tmv22\" (UID: 
\"cdd89f43-cfac-4ed0-9475-9d7821e1e2f0\") " pod="openshift-marketplace/certified-operators-tmv22" Jan 21 16:46:22 crc kubenswrapper[4760]: I0121 16:46:22.014286 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cdd89f43-cfac-4ed0-9475-9d7821e1e2f0-utilities\") pod \"certified-operators-tmv22\" (UID: \"cdd89f43-cfac-4ed0-9475-9d7821e1e2f0\") " pod="openshift-marketplace/certified-operators-tmv22" Jan 21 16:46:22 crc kubenswrapper[4760]: I0121 16:46:22.035918 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hr7gm\" (UniqueName: \"kubernetes.io/projected/cdd89f43-cfac-4ed0-9475-9d7821e1e2f0-kube-api-access-hr7gm\") pod \"certified-operators-tmv22\" (UID: \"cdd89f43-cfac-4ed0-9475-9d7821e1e2f0\") " pod="openshift-marketplace/certified-operators-tmv22" Jan 21 16:46:22 crc kubenswrapper[4760]: I0121 16:46:22.223968 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tmv22" Jan 21 16:46:22 crc kubenswrapper[4760]: I0121 16:46:22.740632 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tmv22"] Jan 21 16:46:22 crc kubenswrapper[4760]: I0121 16:46:22.832690 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tmv22" event={"ID":"cdd89f43-cfac-4ed0-9475-9d7821e1e2f0","Type":"ContainerStarted","Data":"0d9328dffa717cbf5d6b82550f25100411ee4cdb400ec36eb9abff1082ac76a3"} Jan 21 16:46:23 crc kubenswrapper[4760]: I0121 16:46:23.848973 4760 generic.go:334] "Generic (PLEG): container finished" podID="cdd89f43-cfac-4ed0-9475-9d7821e1e2f0" containerID="f6682a48dd3f63088740786b6aa6a76d6db8b937c6bb96f76e38615352d6e09d" exitCode=0 Jan 21 16:46:23 crc kubenswrapper[4760]: I0121 16:46:23.849062 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tmv22" 
event={"ID":"cdd89f43-cfac-4ed0-9475-9d7821e1e2f0","Type":"ContainerDied","Data":"f6682a48dd3f63088740786b6aa6a76d6db8b937c6bb96f76e38615352d6e09d"} Jan 21 16:46:24 crc kubenswrapper[4760]: I0121 16:46:24.859138 4760 generic.go:334] "Generic (PLEG): container finished" podID="cdd89f43-cfac-4ed0-9475-9d7821e1e2f0" containerID="77b12e248ee3c610ab3b01e4f46e8598364cfd0afcb31441cd1d4ac180211c6a" exitCode=0 Jan 21 16:46:24 crc kubenswrapper[4760]: I0121 16:46:24.859232 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tmv22" event={"ID":"cdd89f43-cfac-4ed0-9475-9d7821e1e2f0","Type":"ContainerDied","Data":"77b12e248ee3c610ab3b01e4f46e8598364cfd0afcb31441cd1d4ac180211c6a"} Jan 21 16:46:25 crc kubenswrapper[4760]: I0121 16:46:25.870158 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tmv22" event={"ID":"cdd89f43-cfac-4ed0-9475-9d7821e1e2f0","Type":"ContainerStarted","Data":"a63a6be2f16d2893d33864571a1d4b5e70743a1eac7b54310c16b17caac08ce7"} Jan 21 16:46:25 crc kubenswrapper[4760]: I0121 16:46:25.892928 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tmv22" podStartSLOduration=3.511576957 podStartE2EDuration="4.89290739s" podCreationTimestamp="2026-01-21 16:46:21 +0000 UTC" firstStartedPulling="2026-01-21 16:46:23.851927004 +0000 UTC m=+3554.519696602" lastFinishedPulling="2026-01-21 16:46:25.233257457 +0000 UTC m=+3555.901027035" observedRunningTime="2026-01-21 16:46:25.886384758 +0000 UTC m=+3556.554154336" watchObservedRunningTime="2026-01-21 16:46:25.89290739 +0000 UTC m=+3556.560676968" Jan 21 16:46:32 crc kubenswrapper[4760]: I0121 16:46:32.224617 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tmv22" Jan 21 16:46:32 crc kubenswrapper[4760]: I0121 16:46:32.225183 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-tmv22" Jan 21 16:46:32 crc kubenswrapper[4760]: I0121 16:46:32.277882 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tmv22" Jan 21 16:46:32 crc kubenswrapper[4760]: I0121 16:46:32.980368 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tmv22" Jan 21 16:46:33 crc kubenswrapper[4760]: I0121 16:46:33.031257 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tmv22"] Jan 21 16:46:34 crc kubenswrapper[4760]: I0121 16:46:34.950021 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tmv22" podUID="cdd89f43-cfac-4ed0-9475-9d7821e1e2f0" containerName="registry-server" containerID="cri-o://a63a6be2f16d2893d33864571a1d4b5e70743a1eac7b54310c16b17caac08ce7" gracePeriod=2 Jan 21 16:46:35 crc kubenswrapper[4760]: I0121 16:46:35.432828 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tmv22" Jan 21 16:46:35 crc kubenswrapper[4760]: I0121 16:46:35.525595 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cdd89f43-cfac-4ed0-9475-9d7821e1e2f0-utilities\") pod \"cdd89f43-cfac-4ed0-9475-9d7821e1e2f0\" (UID: \"cdd89f43-cfac-4ed0-9475-9d7821e1e2f0\") " Jan 21 16:46:35 crc kubenswrapper[4760]: I0121 16:46:35.525644 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cdd89f43-cfac-4ed0-9475-9d7821e1e2f0-catalog-content\") pod \"cdd89f43-cfac-4ed0-9475-9d7821e1e2f0\" (UID: \"cdd89f43-cfac-4ed0-9475-9d7821e1e2f0\") " Jan 21 16:46:35 crc kubenswrapper[4760]: I0121 16:46:35.525680 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hr7gm\" (UniqueName: \"kubernetes.io/projected/cdd89f43-cfac-4ed0-9475-9d7821e1e2f0-kube-api-access-hr7gm\") pod \"cdd89f43-cfac-4ed0-9475-9d7821e1e2f0\" (UID: \"cdd89f43-cfac-4ed0-9475-9d7821e1e2f0\") " Jan 21 16:46:35 crc kubenswrapper[4760]: I0121 16:46:35.527896 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cdd89f43-cfac-4ed0-9475-9d7821e1e2f0-utilities" (OuterVolumeSpecName: "utilities") pod "cdd89f43-cfac-4ed0-9475-9d7821e1e2f0" (UID: "cdd89f43-cfac-4ed0-9475-9d7821e1e2f0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:46:35 crc kubenswrapper[4760]: I0121 16:46:35.543649 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdd89f43-cfac-4ed0-9475-9d7821e1e2f0-kube-api-access-hr7gm" (OuterVolumeSpecName: "kube-api-access-hr7gm") pod "cdd89f43-cfac-4ed0-9475-9d7821e1e2f0" (UID: "cdd89f43-cfac-4ed0-9475-9d7821e1e2f0"). InnerVolumeSpecName "kube-api-access-hr7gm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:46:35 crc kubenswrapper[4760]: I0121 16:46:35.578856 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cdd89f43-cfac-4ed0-9475-9d7821e1e2f0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cdd89f43-cfac-4ed0-9475-9d7821e1e2f0" (UID: "cdd89f43-cfac-4ed0-9475-9d7821e1e2f0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:46:35 crc kubenswrapper[4760]: I0121 16:46:35.628335 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cdd89f43-cfac-4ed0-9475-9d7821e1e2f0-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:46:35 crc kubenswrapper[4760]: I0121 16:46:35.628372 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cdd89f43-cfac-4ed0-9475-9d7821e1e2f0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:46:35 crc kubenswrapper[4760]: I0121 16:46:35.628387 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hr7gm\" (UniqueName: \"kubernetes.io/projected/cdd89f43-cfac-4ed0-9475-9d7821e1e2f0-kube-api-access-hr7gm\") on node \"crc\" DevicePath \"\"" Jan 21 16:46:35 crc kubenswrapper[4760]: I0121 16:46:35.962768 4760 generic.go:334] "Generic (PLEG): container finished" podID="cdd89f43-cfac-4ed0-9475-9d7821e1e2f0" containerID="a63a6be2f16d2893d33864571a1d4b5e70743a1eac7b54310c16b17caac08ce7" exitCode=0 Jan 21 16:46:35 crc kubenswrapper[4760]: I0121 16:46:35.962814 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tmv22" event={"ID":"cdd89f43-cfac-4ed0-9475-9d7821e1e2f0","Type":"ContainerDied","Data":"a63a6be2f16d2893d33864571a1d4b5e70743a1eac7b54310c16b17caac08ce7"} Jan 21 16:46:35 crc kubenswrapper[4760]: I0121 16:46:35.962851 4760 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-tmv22" event={"ID":"cdd89f43-cfac-4ed0-9475-9d7821e1e2f0","Type":"ContainerDied","Data":"0d9328dffa717cbf5d6b82550f25100411ee4cdb400ec36eb9abff1082ac76a3"} Jan 21 16:46:35 crc kubenswrapper[4760]: I0121 16:46:35.962875 4760 scope.go:117] "RemoveContainer" containerID="a63a6be2f16d2893d33864571a1d4b5e70743a1eac7b54310c16b17caac08ce7" Jan 21 16:46:35 crc kubenswrapper[4760]: I0121 16:46:35.962879 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tmv22" Jan 21 16:46:35 crc kubenswrapper[4760]: I0121 16:46:35.988582 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tmv22"] Jan 21 16:46:35 crc kubenswrapper[4760]: I0121 16:46:35.991656 4760 scope.go:117] "RemoveContainer" containerID="77b12e248ee3c610ab3b01e4f46e8598364cfd0afcb31441cd1d4ac180211c6a" Jan 21 16:46:35 crc kubenswrapper[4760]: I0121 16:46:35.999048 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tmv22"] Jan 21 16:46:36 crc kubenswrapper[4760]: I0121 16:46:36.019471 4760 scope.go:117] "RemoveContainer" containerID="f6682a48dd3f63088740786b6aa6a76d6db8b937c6bb96f76e38615352d6e09d" Jan 21 16:46:36 crc kubenswrapper[4760]: I0121 16:46:36.067541 4760 scope.go:117] "RemoveContainer" containerID="a63a6be2f16d2893d33864571a1d4b5e70743a1eac7b54310c16b17caac08ce7" Jan 21 16:46:36 crc kubenswrapper[4760]: E0121 16:46:36.067909 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a63a6be2f16d2893d33864571a1d4b5e70743a1eac7b54310c16b17caac08ce7\": container with ID starting with a63a6be2f16d2893d33864571a1d4b5e70743a1eac7b54310c16b17caac08ce7 not found: ID does not exist" containerID="a63a6be2f16d2893d33864571a1d4b5e70743a1eac7b54310c16b17caac08ce7" Jan 21 16:46:36 crc kubenswrapper[4760]: I0121 
16:46:36.067946 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a63a6be2f16d2893d33864571a1d4b5e70743a1eac7b54310c16b17caac08ce7"} err="failed to get container status \"a63a6be2f16d2893d33864571a1d4b5e70743a1eac7b54310c16b17caac08ce7\": rpc error: code = NotFound desc = could not find container \"a63a6be2f16d2893d33864571a1d4b5e70743a1eac7b54310c16b17caac08ce7\": container with ID starting with a63a6be2f16d2893d33864571a1d4b5e70743a1eac7b54310c16b17caac08ce7 not found: ID does not exist" Jan 21 16:46:36 crc kubenswrapper[4760]: I0121 16:46:36.067974 4760 scope.go:117] "RemoveContainer" containerID="77b12e248ee3c610ab3b01e4f46e8598364cfd0afcb31441cd1d4ac180211c6a" Jan 21 16:46:36 crc kubenswrapper[4760]: E0121 16:46:36.068243 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77b12e248ee3c610ab3b01e4f46e8598364cfd0afcb31441cd1d4ac180211c6a\": container with ID starting with 77b12e248ee3c610ab3b01e4f46e8598364cfd0afcb31441cd1d4ac180211c6a not found: ID does not exist" containerID="77b12e248ee3c610ab3b01e4f46e8598364cfd0afcb31441cd1d4ac180211c6a" Jan 21 16:46:36 crc kubenswrapper[4760]: I0121 16:46:36.068269 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77b12e248ee3c610ab3b01e4f46e8598364cfd0afcb31441cd1d4ac180211c6a"} err="failed to get container status \"77b12e248ee3c610ab3b01e4f46e8598364cfd0afcb31441cd1d4ac180211c6a\": rpc error: code = NotFound desc = could not find container \"77b12e248ee3c610ab3b01e4f46e8598364cfd0afcb31441cd1d4ac180211c6a\": container with ID starting with 77b12e248ee3c610ab3b01e4f46e8598364cfd0afcb31441cd1d4ac180211c6a not found: ID does not exist" Jan 21 16:46:36 crc kubenswrapper[4760]: I0121 16:46:36.068287 4760 scope.go:117] "RemoveContainer" containerID="f6682a48dd3f63088740786b6aa6a76d6db8b937c6bb96f76e38615352d6e09d" Jan 21 16:46:36 crc 
kubenswrapper[4760]: E0121 16:46:36.068583 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6682a48dd3f63088740786b6aa6a76d6db8b937c6bb96f76e38615352d6e09d\": container with ID starting with f6682a48dd3f63088740786b6aa6a76d6db8b937c6bb96f76e38615352d6e09d not found: ID does not exist" containerID="f6682a48dd3f63088740786b6aa6a76d6db8b937c6bb96f76e38615352d6e09d" Jan 21 16:46:36 crc kubenswrapper[4760]: I0121 16:46:36.068608 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6682a48dd3f63088740786b6aa6a76d6db8b937c6bb96f76e38615352d6e09d"} err="failed to get container status \"f6682a48dd3f63088740786b6aa6a76d6db8b937c6bb96f76e38615352d6e09d\": rpc error: code = NotFound desc = could not find container \"f6682a48dd3f63088740786b6aa6a76d6db8b937c6bb96f76e38615352d6e09d\": container with ID starting with f6682a48dd3f63088740786b6aa6a76d6db8b937c6bb96f76e38615352d6e09d not found: ID does not exist" Jan 21 16:46:37 crc kubenswrapper[4760]: I0121 16:46:37.636168 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdd89f43-cfac-4ed0-9475-9d7821e1e2f0" path="/var/lib/kubelet/pods/cdd89f43-cfac-4ed0-9475-9d7821e1e2f0/volumes" Jan 21 16:46:50 crc kubenswrapper[4760]: I0121 16:46:50.946604 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:46:50 crc kubenswrapper[4760]: I0121 16:46:50.947126 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Jan 21 16:47:20 crc kubenswrapper[4760]: I0121 16:47:20.946235 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:47:20 crc kubenswrapper[4760]: I0121 16:47:20.946789 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:47:34 crc kubenswrapper[4760]: I0121 16:47:34.466301 4760 generic.go:334] "Generic (PLEG): container finished" podID="061a538a-0f39-44c0-9c33-e96701ced31e" containerID="07ddae2dc9f99ce4c063d1dc9b89965d4135b8ad24f73e8f85faabfb35ed3463" exitCode=0 Jan 21 16:47:34 crc kubenswrapper[4760]: I0121 16:47:34.466457 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"061a538a-0f39-44c0-9c33-e96701ced31e","Type":"ContainerDied","Data":"07ddae2dc9f99ce4c063d1dc9b89965d4135b8ad24f73e8f85faabfb35ed3463"} Jan 21 16:47:35 crc kubenswrapper[4760]: I0121 16:47:35.853898 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 21 16:47:35 crc kubenswrapper[4760]: I0121 16:47:35.979096 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/061a538a-0f39-44c0-9c33-e96701ced31e-ssh-key\") pod \"061a538a-0f39-44c0-9c33-e96701ced31e\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") " Jan 21 16:47:35 crc kubenswrapper[4760]: I0121 16:47:35.979241 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/061a538a-0f39-44c0-9c33-e96701ced31e-ca-certs\") pod \"061a538a-0f39-44c0-9c33-e96701ced31e\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") " Jan 21 16:47:35 crc kubenswrapper[4760]: I0121 16:47:35.979404 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/061a538a-0f39-44c0-9c33-e96701ced31e-test-operator-ephemeral-workdir\") pod \"061a538a-0f39-44c0-9c33-e96701ced31e\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") " Jan 21 16:47:35 crc kubenswrapper[4760]: I0121 16:47:35.979432 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"061a538a-0f39-44c0-9c33-e96701ced31e\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") " Jan 21 16:47:35 crc kubenswrapper[4760]: I0121 16:47:35.979476 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwfvb\" (UniqueName: \"kubernetes.io/projected/061a538a-0f39-44c0-9c33-e96701ced31e-kube-api-access-bwfvb\") pod \"061a538a-0f39-44c0-9c33-e96701ced31e\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") " Jan 21 16:47:35 crc kubenswrapper[4760]: I0121 16:47:35.979514 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/061a538a-0f39-44c0-9c33-e96701ced31e-config-data\") pod \"061a538a-0f39-44c0-9c33-e96701ced31e\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") " Jan 21 16:47:35 crc kubenswrapper[4760]: I0121 16:47:35.979554 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/061a538a-0f39-44c0-9c33-e96701ced31e-test-operator-ephemeral-temporary\") pod \"061a538a-0f39-44c0-9c33-e96701ced31e\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") " Jan 21 16:47:35 crc kubenswrapper[4760]: I0121 16:47:35.979594 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/061a538a-0f39-44c0-9c33-e96701ced31e-openstack-config\") pod \"061a538a-0f39-44c0-9c33-e96701ced31e\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") " Jan 21 16:47:35 crc kubenswrapper[4760]: I0121 16:47:35.979685 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/061a538a-0f39-44c0-9c33-e96701ced31e-openstack-config-secret\") pod \"061a538a-0f39-44c0-9c33-e96701ced31e\" (UID: \"061a538a-0f39-44c0-9c33-e96701ced31e\") " Jan 21 16:47:35 crc kubenswrapper[4760]: I0121 16:47:35.981150 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/061a538a-0f39-44c0-9c33-e96701ced31e-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "061a538a-0f39-44c0-9c33-e96701ced31e" (UID: "061a538a-0f39-44c0-9c33-e96701ced31e"). InnerVolumeSpecName "test-operator-ephemeral-temporary". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:47:35 crc kubenswrapper[4760]: I0121 16:47:35.981773 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/061a538a-0f39-44c0-9c33-e96701ced31e-config-data" (OuterVolumeSpecName: "config-data") pod "061a538a-0f39-44c0-9c33-e96701ced31e" (UID: "061a538a-0f39-44c0-9c33-e96701ced31e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:47:35 crc kubenswrapper[4760]: I0121 16:47:35.986925 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/061a538a-0f39-44c0-9c33-e96701ced31e-kube-api-access-bwfvb" (OuterVolumeSpecName: "kube-api-access-bwfvb") pod "061a538a-0f39-44c0-9c33-e96701ced31e" (UID: "061a538a-0f39-44c0-9c33-e96701ced31e"). InnerVolumeSpecName "kube-api-access-bwfvb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:47:35 crc kubenswrapper[4760]: I0121 16:47:35.987300 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/061a538a-0f39-44c0-9c33-e96701ced31e-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "061a538a-0f39-44c0-9c33-e96701ced31e" (UID: "061a538a-0f39-44c0-9c33-e96701ced31e"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:47:35 crc kubenswrapper[4760]: I0121 16:47:35.990921 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "test-operator-logs") pod "061a538a-0f39-44c0-9c33-e96701ced31e" (UID: "061a538a-0f39-44c0-9c33-e96701ced31e"). InnerVolumeSpecName "local-storage05-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 16:47:36 crc kubenswrapper[4760]: I0121 16:47:36.009429 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/061a538a-0f39-44c0-9c33-e96701ced31e-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "061a538a-0f39-44c0-9c33-e96701ced31e" (UID: "061a538a-0f39-44c0-9c33-e96701ced31e"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:47:36 crc kubenswrapper[4760]: I0121 16:47:36.011560 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/061a538a-0f39-44c0-9c33-e96701ced31e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "061a538a-0f39-44c0-9c33-e96701ced31e" (UID: "061a538a-0f39-44c0-9c33-e96701ced31e"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:47:36 crc kubenswrapper[4760]: I0121 16:47:36.015424 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/061a538a-0f39-44c0-9c33-e96701ced31e-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "061a538a-0f39-44c0-9c33-e96701ced31e" (UID: "061a538a-0f39-44c0-9c33-e96701ced31e"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 16:47:36 crc kubenswrapper[4760]: I0121 16:47:36.040510 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/061a538a-0f39-44c0-9c33-e96701ced31e-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "061a538a-0f39-44c0-9c33-e96701ced31e" (UID: "061a538a-0f39-44c0-9c33-e96701ced31e"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 16:47:36 crc kubenswrapper[4760]: I0121 16:47:36.081630 4760 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/061a538a-0f39-44c0-9c33-e96701ced31e-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 21 16:47:36 crc kubenswrapper[4760]: I0121 16:47:36.081953 4760 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/061a538a-0f39-44c0-9c33-e96701ced31e-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 21 16:47:36 crc kubenswrapper[4760]: I0121 16:47:36.082045 4760 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 21 16:47:36 crc kubenswrapper[4760]: I0121 16:47:36.082148 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwfvb\" (UniqueName: \"kubernetes.io/projected/061a538a-0f39-44c0-9c33-e96701ced31e-kube-api-access-bwfvb\") on node \"crc\" DevicePath \"\"" Jan 21 16:47:36 crc kubenswrapper[4760]: I0121 16:47:36.082252 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/061a538a-0f39-44c0-9c33-e96701ced31e-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 16:47:36 crc kubenswrapper[4760]: I0121 16:47:36.082382 4760 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/061a538a-0f39-44c0-9c33-e96701ced31e-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 21 16:47:36 crc kubenswrapper[4760]: I0121 16:47:36.082485 4760 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/061a538a-0f39-44c0-9c33-e96701ced31e-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 21 
16:47:36 crc kubenswrapper[4760]: I0121 16:47:36.082565 4760 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/061a538a-0f39-44c0-9c33-e96701ced31e-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 21 16:47:36 crc kubenswrapper[4760]: I0121 16:47:36.082657 4760 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/061a538a-0f39-44c0-9c33-e96701ced31e-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 21 16:47:36 crc kubenswrapper[4760]: I0121 16:47:36.107858 4760 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 21 16:47:36 crc kubenswrapper[4760]: I0121 16:47:36.185181 4760 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 21 16:47:36 crc kubenswrapper[4760]: I0121 16:47:36.491166 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"061a538a-0f39-44c0-9c33-e96701ced31e","Type":"ContainerDied","Data":"d21f2f4cbc861872e72cfdc47fc11bbeccdac42c2fa0bb1da4540201ec4b28df"} Jan 21 16:47:36 crc kubenswrapper[4760]: I0121 16:47:36.491216 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 21 16:47:36 crc kubenswrapper[4760]: I0121 16:47:36.491230 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d21f2f4cbc861872e72cfdc47fc11bbeccdac42c2fa0bb1da4540201ec4b28df" Jan 21 16:47:45 crc kubenswrapper[4760]: I0121 16:47:45.545034 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 21 16:47:45 crc kubenswrapper[4760]: E0121 16:47:45.546170 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdd89f43-cfac-4ed0-9475-9d7821e1e2f0" containerName="extract-utilities" Jan 21 16:47:45 crc kubenswrapper[4760]: I0121 16:47:45.546189 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdd89f43-cfac-4ed0-9475-9d7821e1e2f0" containerName="extract-utilities" Jan 21 16:47:45 crc kubenswrapper[4760]: E0121 16:47:45.546224 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdd89f43-cfac-4ed0-9475-9d7821e1e2f0" containerName="registry-server" Jan 21 16:47:45 crc kubenswrapper[4760]: I0121 16:47:45.546233 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdd89f43-cfac-4ed0-9475-9d7821e1e2f0" containerName="registry-server" Jan 21 16:47:45 crc kubenswrapper[4760]: E0121 16:47:45.546253 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdd89f43-cfac-4ed0-9475-9d7821e1e2f0" containerName="extract-content" Jan 21 16:47:45 crc kubenswrapper[4760]: I0121 16:47:45.546262 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdd89f43-cfac-4ed0-9475-9d7821e1e2f0" containerName="extract-content" Jan 21 16:47:45 crc kubenswrapper[4760]: E0121 16:47:45.546280 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="061a538a-0f39-44c0-9c33-e96701ced31e" containerName="tempest-tests-tempest-tests-runner" Jan 21 16:47:45 crc kubenswrapper[4760]: I0121 16:47:45.546288 4760 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="061a538a-0f39-44c0-9c33-e96701ced31e" containerName="tempest-tests-tempest-tests-runner" Jan 21 16:47:45 crc kubenswrapper[4760]: I0121 16:47:45.546608 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdd89f43-cfac-4ed0-9475-9d7821e1e2f0" containerName="registry-server" Jan 21 16:47:45 crc kubenswrapper[4760]: I0121 16:47:45.546630 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="061a538a-0f39-44c0-9c33-e96701ced31e" containerName="tempest-tests-tempest-tests-runner" Jan 21 16:47:45 crc kubenswrapper[4760]: I0121 16:47:45.547452 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 21 16:47:45 crc kubenswrapper[4760]: I0121 16:47:45.549908 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-tg5qr" Jan 21 16:47:45 crc kubenswrapper[4760]: I0121 16:47:45.560149 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 21 16:47:45 crc kubenswrapper[4760]: I0121 16:47:45.664338 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e410b884-0dde-488f-8d8b-b60494f285d5\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 21 16:47:45 crc kubenswrapper[4760]: I0121 16:47:45.664424 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2h4d\" (UniqueName: \"kubernetes.io/projected/e410b884-0dde-488f-8d8b-b60494f285d5-kube-api-access-r2h4d\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e410b884-0dde-488f-8d8b-b60494f285d5\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 21 
16:47:45 crc kubenswrapper[4760]: I0121 16:47:45.767156 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e410b884-0dde-488f-8d8b-b60494f285d5\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 21 16:47:45 crc kubenswrapper[4760]: I0121 16:47:45.767642 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2h4d\" (UniqueName: \"kubernetes.io/projected/e410b884-0dde-488f-8d8b-b60494f285d5-kube-api-access-r2h4d\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e410b884-0dde-488f-8d8b-b60494f285d5\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 21 16:47:45 crc kubenswrapper[4760]: I0121 16:47:45.767914 4760 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e410b884-0dde-488f-8d8b-b60494f285d5\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 21 16:47:45 crc kubenswrapper[4760]: I0121 16:47:45.789922 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2h4d\" (UniqueName: \"kubernetes.io/projected/e410b884-0dde-488f-8d8b-b60494f285d5-kube-api-access-r2h4d\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e410b884-0dde-488f-8d8b-b60494f285d5\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 21 16:47:45 crc kubenswrapper[4760]: I0121 16:47:45.794747 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod 
\"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e410b884-0dde-488f-8d8b-b60494f285d5\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 21 16:47:45 crc kubenswrapper[4760]: I0121 16:47:45.872148 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 21 16:47:46 crc kubenswrapper[4760]: I0121 16:47:46.339981 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 21 16:47:46 crc kubenswrapper[4760]: I0121 16:47:46.578970 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"e410b884-0dde-488f-8d8b-b60494f285d5","Type":"ContainerStarted","Data":"e5a7f54ad9004273552512615ee3c1c737d96364af129da7c4ac197df9ea9653"} Jan 21 16:47:47 crc kubenswrapper[4760]: I0121 16:47:47.592164 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"e410b884-0dde-488f-8d8b-b60494f285d5","Type":"ContainerStarted","Data":"d452347e2bdfcfb16ba6644e55e14e3680bde90873b89de2b3cfd87749cfc803"} Jan 21 16:47:47 crc kubenswrapper[4760]: I0121 16:47:47.611705 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.821783897 podStartE2EDuration="2.611677797s" podCreationTimestamp="2026-01-21 16:47:45 +0000 UTC" firstStartedPulling="2026-01-21 16:47:46.343285355 +0000 UTC m=+3637.011054933" lastFinishedPulling="2026-01-21 16:47:47.133179255 +0000 UTC m=+3637.800948833" observedRunningTime="2026-01-21 16:47:47.607943013 +0000 UTC m=+3638.275712601" watchObservedRunningTime="2026-01-21 16:47:47.611677797 +0000 UTC m=+3638.279447375" Jan 21 16:47:50 crc kubenswrapper[4760]: I0121 16:47:50.946754 4760 patch_prober.go:28] interesting 
pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:47:50 crc kubenswrapper[4760]: I0121 16:47:50.947343 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:47:50 crc kubenswrapper[4760]: I0121 16:47:50.947418 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" Jan 21 16:47:50 crc kubenswrapper[4760]: I0121 16:47:50.948216 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"696842cd662b21910865ccb4754bbe4d3b79853866b2d5f9cf45005d461a53ce"} pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 16:47:50 crc kubenswrapper[4760]: I0121 16:47:50.948306 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" containerID="cri-o://696842cd662b21910865ccb4754bbe4d3b79853866b2d5f9cf45005d461a53ce" gracePeriod=600 Jan 21 16:47:51 crc kubenswrapper[4760]: I0121 16:47:51.630134 4760 generic.go:334] "Generic (PLEG): container finished" podID="5dd365e7-570c-4130-a299-30e376624ce2" containerID="696842cd662b21910865ccb4754bbe4d3b79853866b2d5f9cf45005d461a53ce" exitCode=0 Jan 21 16:47:51 crc kubenswrapper[4760]: I0121 16:47:51.632689 
4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerDied","Data":"696842cd662b21910865ccb4754bbe4d3b79853866b2d5f9cf45005d461a53ce"} Jan 21 16:47:51 crc kubenswrapper[4760]: I0121 16:47:51.632761 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerStarted","Data":"0010323ae814d56748b8c116d8ea055338dfa4efaced2a20eca64aacefee8a83"} Jan 21 16:47:51 crc kubenswrapper[4760]: I0121 16:47:51.632782 4760 scope.go:117] "RemoveContainer" containerID="55dee4eb9868f3c9bf1409cba7a3551fe0e1a068bf7fcdeaf70de656dea0180f" Jan 21 16:48:09 crc kubenswrapper[4760]: I0121 16:48:09.747693 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-x7hmn/must-gather-xv7ts"] Jan 21 16:48:09 crc kubenswrapper[4760]: I0121 16:48:09.750618 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-x7hmn/must-gather-xv7ts" Jan 21 16:48:09 crc kubenswrapper[4760]: I0121 16:48:09.758555 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-x7hmn"/"kube-root-ca.crt" Jan 21 16:48:09 crc kubenswrapper[4760]: I0121 16:48:09.758780 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-x7hmn"/"openshift-service-ca.crt" Jan 21 16:48:09 crc kubenswrapper[4760]: I0121 16:48:09.759378 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-x7hmn"/"default-dockercfg-5tz6g" Jan 21 16:48:09 crc kubenswrapper[4760]: I0121 16:48:09.760904 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-x7hmn/must-gather-xv7ts"] Jan 21 16:48:09 crc kubenswrapper[4760]: I0121 16:48:09.826009 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/42fc2543-8bf4-4b71-8196-e19f701ed2f8-must-gather-output\") pod \"must-gather-xv7ts\" (UID: \"42fc2543-8bf4-4b71-8196-e19f701ed2f8\") " pod="openshift-must-gather-x7hmn/must-gather-xv7ts" Jan 21 16:48:09 crc kubenswrapper[4760]: I0121 16:48:09.826074 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7dv4\" (UniqueName: \"kubernetes.io/projected/42fc2543-8bf4-4b71-8196-e19f701ed2f8-kube-api-access-z7dv4\") pod \"must-gather-xv7ts\" (UID: \"42fc2543-8bf4-4b71-8196-e19f701ed2f8\") " pod="openshift-must-gather-x7hmn/must-gather-xv7ts" Jan 21 16:48:09 crc kubenswrapper[4760]: I0121 16:48:09.927972 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/42fc2543-8bf4-4b71-8196-e19f701ed2f8-must-gather-output\") pod \"must-gather-xv7ts\" (UID: \"42fc2543-8bf4-4b71-8196-e19f701ed2f8\") " 
pod="openshift-must-gather-x7hmn/must-gather-xv7ts" Jan 21 16:48:09 crc kubenswrapper[4760]: I0121 16:48:09.928042 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7dv4\" (UniqueName: \"kubernetes.io/projected/42fc2543-8bf4-4b71-8196-e19f701ed2f8-kube-api-access-z7dv4\") pod \"must-gather-xv7ts\" (UID: \"42fc2543-8bf4-4b71-8196-e19f701ed2f8\") " pod="openshift-must-gather-x7hmn/must-gather-xv7ts" Jan 21 16:48:09 crc kubenswrapper[4760]: I0121 16:48:09.928882 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/42fc2543-8bf4-4b71-8196-e19f701ed2f8-must-gather-output\") pod \"must-gather-xv7ts\" (UID: \"42fc2543-8bf4-4b71-8196-e19f701ed2f8\") " pod="openshift-must-gather-x7hmn/must-gather-xv7ts" Jan 21 16:48:09 crc kubenswrapper[4760]: I0121 16:48:09.948095 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7dv4\" (UniqueName: \"kubernetes.io/projected/42fc2543-8bf4-4b71-8196-e19f701ed2f8-kube-api-access-z7dv4\") pod \"must-gather-xv7ts\" (UID: \"42fc2543-8bf4-4b71-8196-e19f701ed2f8\") " pod="openshift-must-gather-x7hmn/must-gather-xv7ts" Jan 21 16:48:10 crc kubenswrapper[4760]: I0121 16:48:10.071102 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-x7hmn/must-gather-xv7ts" Jan 21 16:48:10 crc kubenswrapper[4760]: I0121 16:48:10.363046 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-x7hmn/must-gather-xv7ts"] Jan 21 16:48:10 crc kubenswrapper[4760]: I0121 16:48:10.805350 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x7hmn/must-gather-xv7ts" event={"ID":"42fc2543-8bf4-4b71-8196-e19f701ed2f8","Type":"ContainerStarted","Data":"064d00ee4b9b29489608e81fac44ff42dcc3ee165feb8b354f00902d275ae880"} Jan 21 16:48:16 crc kubenswrapper[4760]: I0121 16:48:16.858792 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x7hmn/must-gather-xv7ts" event={"ID":"42fc2543-8bf4-4b71-8196-e19f701ed2f8","Type":"ContainerStarted","Data":"dd458370f6a3866b76583e108e35d0b33cb0229df427b03f3d63a0584818be82"} Jan 21 16:48:17 crc kubenswrapper[4760]: I0121 16:48:17.868615 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x7hmn/must-gather-xv7ts" event={"ID":"42fc2543-8bf4-4b71-8196-e19f701ed2f8","Type":"ContainerStarted","Data":"763403362ba10a30ac17a883a3a403546ca4509b0bbd6ee773a9a88bd12fef3a"} Jan 21 16:48:17 crc kubenswrapper[4760]: I0121 16:48:17.888697 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-x7hmn/must-gather-xv7ts" podStartSLOduration=2.825591535 podStartE2EDuration="8.888676056s" podCreationTimestamp="2026-01-21 16:48:09 +0000 UTC" firstStartedPulling="2026-01-21 16:48:10.385584517 +0000 UTC m=+3661.053354095" lastFinishedPulling="2026-01-21 16:48:16.448669038 +0000 UTC m=+3667.116438616" observedRunningTime="2026-01-21 16:48:17.882898891 +0000 UTC m=+3668.550668469" watchObservedRunningTime="2026-01-21 16:48:17.888676056 +0000 UTC m=+3668.556445634" Jan 21 16:48:19 crc kubenswrapper[4760]: E0121 16:48:19.791694 4760 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 
38.129.56.65:53436->38.129.56.65:33639: write tcp 38.129.56.65:53436->38.129.56.65:33639: write: broken pipe Jan 21 16:48:20 crc kubenswrapper[4760]: I0121 16:48:20.730818 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-x7hmn/crc-debug-kmgfx"] Jan 21 16:48:20 crc kubenswrapper[4760]: I0121 16:48:20.732684 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-x7hmn/crc-debug-kmgfx" Jan 21 16:48:20 crc kubenswrapper[4760]: I0121 16:48:20.812749 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsk5h\" (UniqueName: \"kubernetes.io/projected/0932ee92-0962-4a03-a492-a3185d11c7eb-kube-api-access-hsk5h\") pod \"crc-debug-kmgfx\" (UID: \"0932ee92-0962-4a03-a492-a3185d11c7eb\") " pod="openshift-must-gather-x7hmn/crc-debug-kmgfx" Jan 21 16:48:20 crc kubenswrapper[4760]: I0121 16:48:20.813213 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0932ee92-0962-4a03-a492-a3185d11c7eb-host\") pod \"crc-debug-kmgfx\" (UID: \"0932ee92-0962-4a03-a492-a3185d11c7eb\") " pod="openshift-must-gather-x7hmn/crc-debug-kmgfx" Jan 21 16:48:20 crc kubenswrapper[4760]: I0121 16:48:20.915360 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0932ee92-0962-4a03-a492-a3185d11c7eb-host\") pod \"crc-debug-kmgfx\" (UID: \"0932ee92-0962-4a03-a492-a3185d11c7eb\") " pod="openshift-must-gather-x7hmn/crc-debug-kmgfx" Jan 21 16:48:20 crc kubenswrapper[4760]: I0121 16:48:20.915590 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0932ee92-0962-4a03-a492-a3185d11c7eb-host\") pod \"crc-debug-kmgfx\" (UID: \"0932ee92-0962-4a03-a492-a3185d11c7eb\") " pod="openshift-must-gather-x7hmn/crc-debug-kmgfx" Jan 21 16:48:20 crc 
kubenswrapper[4760]: I0121 16:48:20.915780 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsk5h\" (UniqueName: \"kubernetes.io/projected/0932ee92-0962-4a03-a492-a3185d11c7eb-kube-api-access-hsk5h\") pod \"crc-debug-kmgfx\" (UID: \"0932ee92-0962-4a03-a492-a3185d11c7eb\") " pod="openshift-must-gather-x7hmn/crc-debug-kmgfx" Jan 21 16:48:20 crc kubenswrapper[4760]: I0121 16:48:20.939041 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsk5h\" (UniqueName: \"kubernetes.io/projected/0932ee92-0962-4a03-a492-a3185d11c7eb-kube-api-access-hsk5h\") pod \"crc-debug-kmgfx\" (UID: \"0932ee92-0962-4a03-a492-a3185d11c7eb\") " pod="openshift-must-gather-x7hmn/crc-debug-kmgfx" Jan 21 16:48:21 crc kubenswrapper[4760]: I0121 16:48:21.051861 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-x7hmn/crc-debug-kmgfx" Jan 21 16:48:21 crc kubenswrapper[4760]: W0121 16:48:21.106161 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0932ee92_0962_4a03_a492_a3185d11c7eb.slice/crio-73e2bff43eac3e6c369346ed610db2ea185afebd6db8cc8430c18d3eafea5fdd WatchSource:0}: Error finding container 73e2bff43eac3e6c369346ed610db2ea185afebd6db8cc8430c18d3eafea5fdd: Status 404 returned error can't find the container with id 73e2bff43eac3e6c369346ed610db2ea185afebd6db8cc8430c18d3eafea5fdd Jan 21 16:48:21 crc kubenswrapper[4760]: I0121 16:48:21.902259 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x7hmn/crc-debug-kmgfx" event={"ID":"0932ee92-0962-4a03-a492-a3185d11c7eb","Type":"ContainerStarted","Data":"73e2bff43eac3e6c369346ed610db2ea185afebd6db8cc8430c18d3eafea5fdd"} Jan 21 16:48:23 crc kubenswrapper[4760]: I0121 16:48:23.562404 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-api-5c89c5dbb6-sspr9_78418f27-9273-42a4-aaa2-74edfcd10ef1/barbican-api-log/0.log" Jan 21 16:48:23 crc kubenswrapper[4760]: I0121 16:48:23.574112 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5c89c5dbb6-sspr9_78418f27-9273-42a4-aaa2-74edfcd10ef1/barbican-api/0.log" Jan 21 16:48:23 crc kubenswrapper[4760]: I0121 16:48:23.606831 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-86579fc786-9vmn6_6283023b-6e8b-4d25-b8e9-c0d91b08a913/barbican-keystone-listener-log/0.log" Jan 21 16:48:23 crc kubenswrapper[4760]: I0121 16:48:23.612561 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-86579fc786-9vmn6_6283023b-6e8b-4d25-b8e9-c0d91b08a913/barbican-keystone-listener/0.log" Jan 21 16:48:23 crc kubenswrapper[4760]: I0121 16:48:23.637920 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-757cdb9855-pfpj6_470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed/barbican-worker-log/0.log" Jan 21 16:48:23 crc kubenswrapper[4760]: I0121 16:48:23.642952 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-757cdb9855-pfpj6_470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed/barbican-worker/0.log" Jan 21 16:48:23 crc kubenswrapper[4760]: I0121 16:48:23.696735 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f_f4ba3e4f-146a-4af6-885a-877760c90ce0/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:48:23 crc kubenswrapper[4760]: I0121 16:48:23.745591 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_c3a59982-94c8-461f-99f6-8154ca0666c2/ceilometer-central-agent/0.log" Jan 21 16:48:23 crc kubenswrapper[4760]: I0121 16:48:23.830312 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_c3a59982-94c8-461f-99f6-8154ca0666c2/ceilometer-notification-agent/0.log" Jan 21 16:48:23 crc kubenswrapper[4760]: I0121 16:48:23.858391 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_c3a59982-94c8-461f-99f6-8154ca0666c2/sg-core/0.log" Jan 21 16:48:23 crc kubenswrapper[4760]: I0121 16:48:23.876764 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_c3a59982-94c8-461f-99f6-8154ca0666c2/proxy-httpd/0.log" Jan 21 16:48:23 crc kubenswrapper[4760]: I0121 16:48:23.894919 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_f57ee425-0d4d-41f7-bf99-4ab4e87ead78/cinder-api-log/0.log" Jan 21 16:48:23 crc kubenswrapper[4760]: I0121 16:48:23.955268 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_f57ee425-0d4d-41f7-bf99-4ab4e87ead78/cinder-api/0.log" Jan 21 16:48:24 crc kubenswrapper[4760]: I0121 16:48:24.000098 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_98fdd45e-ce0f-464e-9ac9-a61c03e0eea5/cinder-scheduler/0.log" Jan 21 16:48:24 crc kubenswrapper[4760]: I0121 16:48:24.036990 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_98fdd45e-ce0f-464e-9ac9-a61c03e0eea5/probe/0.log" Jan 21 16:48:24 crc kubenswrapper[4760]: I0121 16:48:24.079565 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-c7vns_8adc5733-eeac-4148-878a-61b908f0a85b/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:48:24 crc kubenswrapper[4760]: I0121 16:48:24.103182 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-2v6sq_cd8384f1-8b63-421a-b279-ae67ba25c2d2/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:48:24 crc kubenswrapper[4760]: I0121 
16:48:24.150689 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-9nlpp_2be85016-adb8-42d1-8b8b-90d92e06edec/dnsmasq-dns/0.log" Jan 21 16:48:24 crc kubenswrapper[4760]: I0121 16:48:24.159423 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-9nlpp_2be85016-adb8-42d1-8b8b-90d92e06edec/init/0.log" Jan 21 16:48:24 crc kubenswrapper[4760]: I0121 16:48:24.216756 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-sd829_2ffc46c3-eeae-4b68-bede-4c1e5af6fe46/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:48:24 crc kubenswrapper[4760]: I0121 16:48:24.234666 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_468d7d17-9181-4f39-851d-3acff337e10c/glance-log/0.log" Jan 21 16:48:24 crc kubenswrapper[4760]: I0121 16:48:24.264833 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_468d7d17-9181-4f39-851d-3acff337e10c/glance-httpd/0.log" Jan 21 16:48:24 crc kubenswrapper[4760]: I0121 16:48:24.279651 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_5d78a94b-d39f-4654-936e-8a39369b2082/glance-log/0.log" Jan 21 16:48:24 crc kubenswrapper[4760]: I0121 16:48:24.327888 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_5d78a94b-d39f-4654-936e-8a39369b2082/glance-httpd/0.log" Jan 21 16:48:24 crc kubenswrapper[4760]: I0121 16:48:24.690258 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5c9896dc76-gwrzv_0e7e96ce-a64f-4a21-97e1-b2ebabc7e236/horizon-log/0.log" Jan 21 16:48:24 crc kubenswrapper[4760]: I0121 16:48:24.828262 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_horizon-5c9896dc76-gwrzv_0e7e96ce-a64f-4a21-97e1-b2ebabc7e236/horizon/0.log" Jan 21 16:48:24 crc kubenswrapper[4760]: I0121 16:48:24.834865 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5c9896dc76-gwrzv_0e7e96ce-a64f-4a21-97e1-b2ebabc7e236/horizon/1.log" Jan 21 16:48:24 crc kubenswrapper[4760]: I0121 16:48:24.854062 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l_81b15839-b904-442b-bd7a-f42a043a7be6/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:48:24 crc kubenswrapper[4760]: I0121 16:48:24.882798 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-lcwmb_d89a08a9-deb3-4c27-ab2e-4fab854717cc/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:48:25 crc kubenswrapper[4760]: I0121 16:48:25.087469 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-5b497869f9-hs8kf_42613e5a-e22d-4358-8cd2-1ebfd1a42b55/keystone-api/0.log" Jan 21 16:48:25 crc kubenswrapper[4760]: I0121 16:48:25.098443 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_f0d87473-0ca7-46b5-a57f-611e3014ab77/kube-state-metrics/0.log" Jan 21 16:48:25 crc kubenswrapper[4760]: I0121 16:48:25.141762 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-6f958_60b03623-4db5-445f-89b4-61f39ac04dc2/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:48:36 crc kubenswrapper[4760]: I0121 16:48:36.126282 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x7hmn/crc-debug-kmgfx" event={"ID":"0932ee92-0962-4a03-a492-a3185d11c7eb","Type":"ContainerStarted","Data":"5eaca0dba6db04cd67c8ee124b7fbd94d884f4d6e1e4b9795017aa247328cdeb"} Jan 21 16:48:36 crc kubenswrapper[4760]: I0121 16:48:36.150959 4760 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-x7hmn/crc-debug-kmgfx" podStartSLOduration=1.930305262 podStartE2EDuration="16.15093904s" podCreationTimestamp="2026-01-21 16:48:20 +0000 UTC" firstStartedPulling="2026-01-21 16:48:21.108625241 +0000 UTC m=+3671.776394819" lastFinishedPulling="2026-01-21 16:48:35.329259019 +0000 UTC m=+3685.997028597" observedRunningTime="2026-01-21 16:48:36.141200495 +0000 UTC m=+3686.808970073" watchObservedRunningTime="2026-01-21 16:48:36.15093904 +0000 UTC m=+3686.818708638" Jan 21 16:48:43 crc kubenswrapper[4760]: I0121 16:48:43.742389 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_06184570-059b-4132-a5b6-365e3e12e383/memcached/0.log" Jan 21 16:48:43 crc kubenswrapper[4760]: I0121 16:48:43.930446 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6c6778d77f-gkzrk_42e45354-7553-43f2-af5a-613dd1a6dde9/neutron-api/0.log" Jan 21 16:48:43 crc kubenswrapper[4760]: I0121 16:48:43.992865 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6c6778d77f-gkzrk_42e45354-7553-43f2-af5a-613dd1a6dde9/neutron-httpd/0.log" Jan 21 16:48:44 crc kubenswrapper[4760]: I0121 16:48:44.019311 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2_93a8f498-bf0c-43f6-aad8-e26843ca3295/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:48:44 crc kubenswrapper[4760]: I0121 16:48:44.240012 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_0d5def02-0b1b-4b2e-b03c-028387759ced/nova-api-log/0.log" Jan 21 16:48:44 crc kubenswrapper[4760]: I0121 16:48:44.593281 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_0d5def02-0b1b-4b2e-b03c-028387759ced/nova-api-api/0.log" Jan 21 16:48:44 crc kubenswrapper[4760]: I0121 16:48:44.713316 4760 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_56d015a2-9a67-4f44-a726-21949444f11b/nova-cell0-conductor-conductor/0.log" Jan 21 16:48:44 crc kubenswrapper[4760]: I0121 16:48:44.785221 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_5bc3a5b4-ab7d-4215-bd61-ce6c206856ae/nova-cell1-conductor-conductor/0.log" Jan 21 16:48:44 crc kubenswrapper[4760]: I0121 16:48:44.847375 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_7a3e9e72-ecf6-406f-ab2b-02804c7f23e5/nova-cell1-novncproxy-novncproxy/0.log" Jan 21 16:48:44 crc kubenswrapper[4760]: I0121 16:48:44.911257 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-tqhjb_5a4de6cd-9a26-49b4-a3f7-eb743b8830b1/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:48:45 crc kubenswrapper[4760]: I0121 16:48:45.012339 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_ab3a95e8-224b-406c-b0ad-b184e8bec225/nova-metadata-log/0.log" Jan 21 16:48:45 crc kubenswrapper[4760]: I0121 16:48:45.050654 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-skl79_57bfd668-6e8b-475a-99b4-cdbd22c9c19f/controller/0.log" Jan 21 16:48:45 crc kubenswrapper[4760]: I0121 16:48:45.063558 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-skl79_57bfd668-6e8b-475a-99b4-cdbd22c9c19f/kube-rbac-proxy/0.log" Jan 21 16:48:45 crc kubenswrapper[4760]: I0121 16:48:45.096239 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/controller/0.log" Jan 21 16:48:46 crc kubenswrapper[4760]: I0121 16:48:46.475616 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_ab3a95e8-224b-406c-b0ad-b184e8bec225/nova-metadata-metadata/0.log" Jan 21 
16:48:46 crc kubenswrapper[4760]: I0121 16:48:46.629910 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_582a5834-a028-489f-943f-8928d5d9f26c/nova-scheduler-scheduler/0.log" Jan 21 16:48:46 crc kubenswrapper[4760]: I0121 16:48:46.662175 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_d0612ab6-de5e-4f61-9e1c-97f8237c996c/galera/0.log" Jan 21 16:48:46 crc kubenswrapper[4760]: I0121 16:48:46.680090 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_d0612ab6-de5e-4f61-9e1c-97f8237c996c/mysql-bootstrap/0.log" Jan 21 16:48:46 crc kubenswrapper[4760]: I0121 16:48:46.710232 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_29bd8985-5f22-46e9-9868-607bf9be273e/galera/0.log" Jan 21 16:48:46 crc kubenswrapper[4760]: I0121 16:48:46.723871 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_29bd8985-5f22-46e9-9868-607bf9be273e/mysql-bootstrap/0.log" Jan 21 16:48:46 crc kubenswrapper[4760]: I0121 16:48:46.732101 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_8e6f14c6-f759-439a-9ea1-63a88e650f89/openstackclient/0.log" Jan 21 16:48:46 crc kubenswrapper[4760]: I0121 16:48:46.752831 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ltr79_c17cd40e-6e7b-4c1e-9ca8-e6edc1248330/ovn-controller/0.log" Jan 21 16:48:46 crc kubenswrapper[4760]: I0121 16:48:46.766927 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-sz9bq_0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc/openstack-network-exporter/0.log" Jan 21 16:48:46 crc kubenswrapper[4760]: I0121 16:48:46.785865 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jfrjn_1a0315f5-89b8-4589-b088-2ea2bb15e078/ovsdb-server/0.log" Jan 21 16:48:46 crc 
kubenswrapper[4760]: I0121 16:48:46.805580 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jfrjn_1a0315f5-89b8-4589-b088-2ea2bb15e078/ovs-vswitchd/0.log" Jan 21 16:48:46 crc kubenswrapper[4760]: I0121 16:48:46.814402 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jfrjn_1a0315f5-89b8-4589-b088-2ea2bb15e078/ovsdb-server-init/0.log" Jan 21 16:48:46 crc kubenswrapper[4760]: I0121 16:48:46.862362 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-pv9jf_fee344d1-5ba0-4b85-85bf-8133d451624e/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:48:46 crc kubenswrapper[4760]: I0121 16:48:46.875358 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_50c45f6c-b35d-41f8-b358-afaf380d8f08/ovn-northd/0.log" Jan 21 16:48:46 crc kubenswrapper[4760]: I0121 16:48:46.884729 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_50c45f6c-b35d-41f8-b358-afaf380d8f08/openstack-network-exporter/0.log" Jan 21 16:48:46 crc kubenswrapper[4760]: I0121 16:48:46.907741 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_47448c69-3198-48d8-8623-9a339a934aca/ovsdbserver-nb/0.log" Jan 21 16:48:46 crc kubenswrapper[4760]: I0121 16:48:46.915097 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_47448c69-3198-48d8-8623-9a339a934aca/openstack-network-exporter/0.log" Jan 21 16:48:47 crc kubenswrapper[4760]: I0121 16:48:47.010106 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_9ab8d081-832d-4e4c-92e6-94a97545613c/ovsdbserver-sb/0.log" Jan 21 16:48:47 crc kubenswrapper[4760]: I0121 16:48:47.019408 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_9ab8d081-832d-4e4c-92e6-94a97545613c/openstack-network-exporter/0.log" Jan 21 
16:48:47 crc kubenswrapper[4760]: I0121 16:48:47.127114 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-65c954fbbd-tb9kj_b3582d40-46db-4b7b-a7ca-12950184f371/placement-log/0.log" Jan 21 16:48:47 crc kubenswrapper[4760]: I0121 16:48:47.189854 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-65c954fbbd-tb9kj_b3582d40-46db-4b7b-a7ca-12950184f371/placement-api/0.log" Jan 21 16:48:47 crc kubenswrapper[4760]: I0121 16:48:47.339845 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_3751c728-a57c-483f-847a-b8765d807937/rabbitmq/0.log" Jan 21 16:48:47 crc kubenswrapper[4760]: I0121 16:48:47.353996 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_3751c728-a57c-483f-847a-b8765d807937/setup-container/0.log" Jan 21 16:48:47 crc kubenswrapper[4760]: I0121 16:48:47.380030 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bf6d5aab-531b-4b6b-94fc-1b386b6b7684/rabbitmq/0.log" Jan 21 16:48:47 crc kubenswrapper[4760]: I0121 16:48:47.385232 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bf6d5aab-531b-4b6b-94fc-1b386b6b7684/setup-container/0.log" Jan 21 16:48:47 crc kubenswrapper[4760]: I0121 16:48:47.469601 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/frr/0.log" Jan 21 16:48:47 crc kubenswrapper[4760]: I0121 16:48:47.478006 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/reloader/0.log" Jan 21 16:48:47 crc kubenswrapper[4760]: I0121 16:48:47.523239 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/frr-metrics/0.log" Jan 21 16:48:47 crc kubenswrapper[4760]: I0121 16:48:47.529978 4760 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/kube-rbac-proxy/0.log" Jan 21 16:48:47 crc kubenswrapper[4760]: I0121 16:48:47.531847 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-r42kq_72a45862-35fa-4414-83d0-3e20bf784780/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:48:47 crc kubenswrapper[4760]: I0121 16:48:47.686902 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/kube-rbac-proxy-frr/0.log" Jan 21 16:48:47 crc kubenswrapper[4760]: I0121 16:48:47.693216 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-8jj42_07be8207-721d-4d0a-bada-ac8b6c54c3ce/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:48:47 crc kubenswrapper[4760]: I0121 16:48:47.701709 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/cp-frr-files/0.log" Jan 21 16:48:47 crc kubenswrapper[4760]: I0121 16:48:47.706262 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg_c223d637-a759-4b7a-9eca-d4aa22707301/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:48:47 crc kubenswrapper[4760]: I0121 16:48:47.710972 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/cp-reloader/0.log" Jan 21 16:48:47 crc kubenswrapper[4760]: I0121 16:48:47.718185 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/cp-metrics/0.log" Jan 21 16:48:47 crc kubenswrapper[4760]: I0121 16:48:47.729722 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-j5hkb_e0d57ee5-e43e-4edf-bbb1-1429b366bfac/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:48:47 crc kubenswrapper[4760]: I0121 16:48:47.739642 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-8cr5r_120c759b-d895-4898-a35a-2c7f74bb71b2/frr-k8s-webhook-server/0.log" Jan 21 16:48:47 crc kubenswrapper[4760]: I0121 16:48:47.778260 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-2hb8p_28bf7889-c488-4d87-8b69-e477b27a7909/ssh-known-hosts-edpm-deployment/0.log" Jan 21 16:48:48 crc kubenswrapper[4760]: I0121 16:48:48.057564 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6c4667f969-l2pv4_18110c9f-5a23-4a4c-9b39-289c23ff6e1c/manager/0.log" Jan 21 16:48:48 crc kubenswrapper[4760]: I0121 16:48:48.088762 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-746c87857b-5gngc_280fc33b-ec55-41cd-92e4-17ed099904a0/webhook-server/0.log" Jan 21 16:48:48 crc kubenswrapper[4760]: I0121 16:48:48.180199 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7c9f777647-hfk58_92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c/proxy-httpd/0.log" Jan 21 16:48:48 crc kubenswrapper[4760]: I0121 16:48:48.251773 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7c9f777647-hfk58_92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c/proxy-server/0.log" Jan 21 16:48:48 crc kubenswrapper[4760]: I0121 16:48:48.266814 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-vscfw_c41049e0-0ea2-4944-a23b-739987c73dce/swift-ring-rebalance/0.log" Jan 21 16:48:48 crc kubenswrapper[4760]: I0121 16:48:48.303980 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_d1ccc2ed-d1e8-4b84-807d-55d70e8def12/account-server/0.log" Jan 21 16:48:48 crc kubenswrapper[4760]: I0121 16:48:48.346852 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d1ccc2ed-d1e8-4b84-807d-55d70e8def12/account-replicator/0.log" Jan 21 16:48:48 crc kubenswrapper[4760]: I0121 16:48:48.353207 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d1ccc2ed-d1e8-4b84-807d-55d70e8def12/account-auditor/0.log" Jan 21 16:48:48 crc kubenswrapper[4760]: I0121 16:48:48.359713 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d1ccc2ed-d1e8-4b84-807d-55d70e8def12/account-reaper/0.log" Jan 21 16:48:48 crc kubenswrapper[4760]: I0121 16:48:48.367282 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d1ccc2ed-d1e8-4b84-807d-55d70e8def12/container-server/0.log" Jan 21 16:48:48 crc kubenswrapper[4760]: I0121 16:48:48.517518 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d1ccc2ed-d1e8-4b84-807d-55d70e8def12/container-replicator/0.log" Jan 21 16:48:48 crc kubenswrapper[4760]: I0121 16:48:48.530461 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d1ccc2ed-d1e8-4b84-807d-55d70e8def12/container-auditor/0.log" Jan 21 16:48:48 crc kubenswrapper[4760]: I0121 16:48:48.559200 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d1ccc2ed-d1e8-4b84-807d-55d70e8def12/container-updater/0.log" Jan 21 16:48:48 crc kubenswrapper[4760]: I0121 16:48:48.565169 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d1ccc2ed-d1e8-4b84-807d-55d70e8def12/object-server/0.log" Jan 21 16:48:48 crc kubenswrapper[4760]: I0121 16:48:48.599071 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_d1ccc2ed-d1e8-4b84-807d-55d70e8def12/object-replicator/0.log" Jan 21 16:48:48 crc kubenswrapper[4760]: I0121 16:48:48.620247 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d1ccc2ed-d1e8-4b84-807d-55d70e8def12/object-auditor/0.log" Jan 21 16:48:48 crc kubenswrapper[4760]: I0121 16:48:48.629111 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d1ccc2ed-d1e8-4b84-807d-55d70e8def12/object-updater/0.log" Jan 21 16:48:48 crc kubenswrapper[4760]: I0121 16:48:48.639931 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d1ccc2ed-d1e8-4b84-807d-55d70e8def12/object-expirer/0.log" Jan 21 16:48:48 crc kubenswrapper[4760]: I0121 16:48:48.680498 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d1ccc2ed-d1e8-4b84-807d-55d70e8def12/rsync/0.log" Jan 21 16:48:48 crc kubenswrapper[4760]: I0121 16:48:48.689412 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d1ccc2ed-d1e8-4b84-807d-55d70e8def12/swift-recon-cron/0.log" Jan 21 16:48:49 crc kubenswrapper[4760]: I0121 16:48:49.192612 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-hblbg_bb09237a-f1eb-4d14-894f-ac460ce3b7c3/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:48:49 crc kubenswrapper[4760]: I0121 16:48:49.207555 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-d6jcx_dbe6716c-6a30-454c-979c-59566d2c29b6/speaker/0.log" Jan 21 16:48:49 crc kubenswrapper[4760]: I0121 16:48:49.216760 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-d6jcx_dbe6716c-6a30-454c-979c-59566d2c29b6/kube-rbac-proxy/0.log" Jan 21 16:48:49 crc kubenswrapper[4760]: I0121 16:48:49.222733 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_tempest-tests-tempest_061a538a-0f39-44c0-9c33-e96701ced31e/tempest-tests-tempest-tests-runner/0.log" Jan 21 16:48:49 crc kubenswrapper[4760]: I0121 16:48:49.229729 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_e410b884-0dde-488f-8d8b-b60494f285d5/test-operator-logs-container/0.log" Jan 21 16:48:49 crc kubenswrapper[4760]: I0121 16:48:49.334235 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-csfth_9b589bc2-f08a-4319-a56e-145673e19eee/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:48:51 crc kubenswrapper[4760]: I0121 16:48:51.435644 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7ddb5c749-nszmq_ebbdf3cf-f86a-471e-89d0-d2a43f8245f6/manager/0.log" Jan 21 16:48:51 crc kubenswrapper[4760]: I0121 16:48:51.481200 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-zlfp7_6026e9ac-64d0-4386-bbd8-f0ac19960a22/manager/0.log" Jan 21 16:48:51 crc kubenswrapper[4760]: I0121 16:48:51.494345 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-kc2f5_8bcbe073-fa37-480d-a74a-af4c8d6a449b/manager/0.log" Jan 21 16:48:51 crc kubenswrapper[4760]: I0121 16:48:51.504996 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql_ab7a2391-a0e7-4576-a91a-bf31978dc7ad/extract/0.log" Jan 21 16:48:51 crc kubenswrapper[4760]: I0121 16:48:51.516303 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql_ab7a2391-a0e7-4576-a91a-bf31978dc7ad/util/0.log" Jan 21 16:48:51 crc 
kubenswrapper[4760]: I0121 16:48:51.530635 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql_ab7a2391-a0e7-4576-a91a-bf31978dc7ad/pull/0.log" Jan 21 16:48:51 crc kubenswrapper[4760]: I0121 16:48:51.627114 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-z2bkt_bac59717-45dd-495a-8874-b4f29a8adc3f/manager/0.log" Jan 21 16:48:51 crc kubenswrapper[4760]: I0121 16:48:51.637833 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-k92xb_97d1cdc7-8fc8-4e7b-b231-0cceadc61597/manager/0.log" Jan 21 16:48:51 crc kubenswrapper[4760]: I0121 16:48:51.664487 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-wp6f6_1b969ec1-1858-44ff-92da-a071b9ff15ee/manager/0.log" Jan 21 16:48:51 crc kubenswrapper[4760]: I0121 16:48:51.917575 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-7trxk_a441beba-fca9-47d4-bf5b-1533929ea421/manager/0.log" Jan 21 16:48:51 crc kubenswrapper[4760]: I0121 16:48:51.929669 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-z7mkd_a28cddfd-04c6-4860-a5eb-c341f2b25009/manager/0.log" Jan 21 16:48:51 crc kubenswrapper[4760]: I0121 16:48:51.999578 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-pp2ln_f3256ca9-a35f-4ae0-a56c-ac2eaf3bbdc3/manager/0.log" Jan 21 16:48:52 crc kubenswrapper[4760]: I0121 16:48:52.027159 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-rjrtw_1530b88f-1192-4aa8-b9ba-82f23e37ea6a/manager/0.log" Jan 
21 16:48:52 crc kubenswrapper[4760]: I0121 16:48:52.141519 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-chvdr_80ad016c-9145-4e38-90f1-515a1fcd0fc7/manager/0.log" Jan 21 16:48:52 crc kubenswrapper[4760]: I0121 16:48:52.220169 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-7vqlg_2ef1c912-1599-4799-8f4c-1c9cb20045ba/manager/0.log" Jan 21 16:48:52 crc kubenswrapper[4760]: I0121 16:48:52.294522 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-xckkd_7e819adc-151b-456f-b41f-5101b03ab7b2/manager/0.log" Jan 21 16:48:52 crc kubenswrapper[4760]: I0121 16:48:52.306658 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-566bc_0252011a-4dac-4cad-94b3-39a6cf9bcd42/manager/0.log" Jan 21 16:48:52 crc kubenswrapper[4760]: I0121 16:48:52.340992 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt_28e62955-b747-4ca8-aa6b-d0678242596f/manager/0.log" Jan 21 16:48:52 crc kubenswrapper[4760]: I0121 16:48:52.483869 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5bb58d564b-c5ghx_5ef28c93-e9fc-4d47-b280-5372e4c7aaf7/operator/0.log" Jan 21 16:48:53 crc kubenswrapper[4760]: I0121 16:48:53.991072 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-867799c6f-wh9wg_4023c758-3567-4e32-97de-9501e117e965/manager/0.log" Jan 21 16:48:54 crc kubenswrapper[4760]: I0121 16:48:54.002300 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-operator-index-7qqml_593c7623-4bb3-4d34-b7cf-b7bcaa5d292e/registry-server/0.log" Jan 21 16:48:54 crc kubenswrapper[4760]: I0121 16:48:54.069395 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-ffq4x_daef61f2-122d-4414-b7df-24982387fa95/manager/0.log" Jan 21 16:48:54 crc kubenswrapper[4760]: I0121 16:48:54.104862 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-lqgfs_75bcd345-56d6-4c12-9392-eea68c43dc30/manager/0.log" Jan 21 16:48:54 crc kubenswrapper[4760]: I0121 16:48:54.127245 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-vxwmq_a2806ede-c1d4-4571-8829-1b94cf7d1606/operator/0.log" Jan 21 16:48:54 crc kubenswrapper[4760]: I0121 16:48:54.156787 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-49prq_8d3c8a68-0896-4875-b6ff-d6f6fd2794b6/manager/0.log" Jan 21 16:48:54 crc kubenswrapper[4760]: I0121 16:48:54.217410 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-m7zb2_b511b419-e589-4783-a6a8-6d6fee8decde/manager/0.log" Jan 21 16:48:54 crc kubenswrapper[4760]: I0121 16:48:54.230734 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7cd8bc9dbb-cfsr6_813b8c35-22e2-41a4-9523-a6cf3cd99ab2/manager/0.log" Jan 21 16:48:54 crc kubenswrapper[4760]: I0121 16:48:54.241886 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-fkd2l_d8bbdcea-a920-4fb4-b434-2323a28d0ea7/manager/0.log" Jan 21 16:48:58 crc kubenswrapper[4760]: I0121 16:48:58.914652 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-dm455_0df700c2-3091-4770-b404-cc81bc416387/control-plane-machine-set-operator/0.log" Jan 21 16:48:58 crc kubenswrapper[4760]: I0121 16:48:58.939760 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-4x9fq_3671d10c-81c6-4c7f-9117-1c237e4efe51/kube-rbac-proxy/0.log" Jan 21 16:48:58 crc kubenswrapper[4760]: I0121 16:48:58.951062 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-4x9fq_3671d10c-81c6-4c7f-9117-1c237e4efe51/machine-api-operator/0.log" Jan 21 16:49:17 crc kubenswrapper[4760]: I0121 16:49:17.589127 4760 generic.go:334] "Generic (PLEG): container finished" podID="0932ee92-0962-4a03-a492-a3185d11c7eb" containerID="5eaca0dba6db04cd67c8ee124b7fbd94d884f4d6e1e4b9795017aa247328cdeb" exitCode=0 Jan 21 16:49:17 crc kubenswrapper[4760]: I0121 16:49:17.589201 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x7hmn/crc-debug-kmgfx" event={"ID":"0932ee92-0962-4a03-a492-a3185d11c7eb","Type":"ContainerDied","Data":"5eaca0dba6db04cd67c8ee124b7fbd94d884f4d6e1e4b9795017aa247328cdeb"} Jan 21 16:49:18 crc kubenswrapper[4760]: I0121 16:49:18.700951 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-x7hmn/crc-debug-kmgfx" Jan 21 16:49:18 crc kubenswrapper[4760]: I0121 16:49:18.741494 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-x7hmn/crc-debug-kmgfx"] Jan 21 16:49:18 crc kubenswrapper[4760]: I0121 16:49:18.753190 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-x7hmn/crc-debug-kmgfx"] Jan 21 16:49:18 crc kubenswrapper[4760]: I0121 16:49:18.864888 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsk5h\" (UniqueName: \"kubernetes.io/projected/0932ee92-0962-4a03-a492-a3185d11c7eb-kube-api-access-hsk5h\") pod \"0932ee92-0962-4a03-a492-a3185d11c7eb\" (UID: \"0932ee92-0962-4a03-a492-a3185d11c7eb\") " Jan 21 16:49:18 crc kubenswrapper[4760]: I0121 16:49:18.865066 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0932ee92-0962-4a03-a492-a3185d11c7eb-host\") pod \"0932ee92-0962-4a03-a492-a3185d11c7eb\" (UID: \"0932ee92-0962-4a03-a492-a3185d11c7eb\") " Jan 21 16:49:18 crc kubenswrapper[4760]: I0121 16:49:18.865196 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0932ee92-0962-4a03-a492-a3185d11c7eb-host" (OuterVolumeSpecName: "host") pod "0932ee92-0962-4a03-a492-a3185d11c7eb" (UID: "0932ee92-0962-4a03-a492-a3185d11c7eb"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 16:49:18 crc kubenswrapper[4760]: I0121 16:49:18.865679 4760 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0932ee92-0962-4a03-a492-a3185d11c7eb-host\") on node \"crc\" DevicePath \"\"" Jan 21 16:49:18 crc kubenswrapper[4760]: I0121 16:49:18.873740 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0932ee92-0962-4a03-a492-a3185d11c7eb-kube-api-access-hsk5h" (OuterVolumeSpecName: "kube-api-access-hsk5h") pod "0932ee92-0962-4a03-a492-a3185d11c7eb" (UID: "0932ee92-0962-4a03-a492-a3185d11c7eb"). InnerVolumeSpecName "kube-api-access-hsk5h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:49:18 crc kubenswrapper[4760]: I0121 16:49:18.967720 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hsk5h\" (UniqueName: \"kubernetes.io/projected/0932ee92-0962-4a03-a492-a3185d11c7eb-kube-api-access-hsk5h\") on node \"crc\" DevicePath \"\"" Jan 21 16:49:19 crc kubenswrapper[4760]: I0121 16:49:19.607551 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73e2bff43eac3e6c369346ed610db2ea185afebd6db8cc8430c18d3eafea5fdd" Jan 21 16:49:19 crc kubenswrapper[4760]: I0121 16:49:19.607611 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-x7hmn/crc-debug-kmgfx" Jan 21 16:49:19 crc kubenswrapper[4760]: I0121 16:49:19.637616 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0932ee92-0962-4a03-a492-a3185d11c7eb" path="/var/lib/kubelet/pods/0932ee92-0962-4a03-a492-a3185d11c7eb/volumes" Jan 21 16:49:20 crc kubenswrapper[4760]: I0121 16:49:20.056760 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-x7hmn/crc-debug-w5fjg"] Jan 21 16:49:20 crc kubenswrapper[4760]: E0121 16:49:20.057120 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0932ee92-0962-4a03-a492-a3185d11c7eb" containerName="container-00" Jan 21 16:49:20 crc kubenswrapper[4760]: I0121 16:49:20.057132 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="0932ee92-0962-4a03-a492-a3185d11c7eb" containerName="container-00" Jan 21 16:49:20 crc kubenswrapper[4760]: I0121 16:49:20.057307 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="0932ee92-0962-4a03-a492-a3185d11c7eb" containerName="container-00" Jan 21 16:49:20 crc kubenswrapper[4760]: I0121 16:49:20.057889 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-x7hmn/crc-debug-w5fjg" Jan 21 16:49:20 crc kubenswrapper[4760]: I0121 16:49:20.190215 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a4306708-82b2-4ddd-a75b-f7616f1056f3-host\") pod \"crc-debug-w5fjg\" (UID: \"a4306708-82b2-4ddd-a75b-f7616f1056f3\") " pod="openshift-must-gather-x7hmn/crc-debug-w5fjg" Jan 21 16:49:20 crc kubenswrapper[4760]: I0121 16:49:20.190478 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg74f\" (UniqueName: \"kubernetes.io/projected/a4306708-82b2-4ddd-a75b-f7616f1056f3-kube-api-access-tg74f\") pod \"crc-debug-w5fjg\" (UID: \"a4306708-82b2-4ddd-a75b-f7616f1056f3\") " pod="openshift-must-gather-x7hmn/crc-debug-w5fjg" Jan 21 16:49:20 crc kubenswrapper[4760]: I0121 16:49:20.292176 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tg74f\" (UniqueName: \"kubernetes.io/projected/a4306708-82b2-4ddd-a75b-f7616f1056f3-kube-api-access-tg74f\") pod \"crc-debug-w5fjg\" (UID: \"a4306708-82b2-4ddd-a75b-f7616f1056f3\") " pod="openshift-must-gather-x7hmn/crc-debug-w5fjg" Jan 21 16:49:20 crc kubenswrapper[4760]: I0121 16:49:20.292310 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a4306708-82b2-4ddd-a75b-f7616f1056f3-host\") pod \"crc-debug-w5fjg\" (UID: \"a4306708-82b2-4ddd-a75b-f7616f1056f3\") " pod="openshift-must-gather-x7hmn/crc-debug-w5fjg" Jan 21 16:49:20 crc kubenswrapper[4760]: I0121 16:49:20.292460 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a4306708-82b2-4ddd-a75b-f7616f1056f3-host\") pod \"crc-debug-w5fjg\" (UID: \"a4306708-82b2-4ddd-a75b-f7616f1056f3\") " pod="openshift-must-gather-x7hmn/crc-debug-w5fjg" Jan 21 16:49:20 crc 
kubenswrapper[4760]: I0121 16:49:20.309914 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tg74f\" (UniqueName: \"kubernetes.io/projected/a4306708-82b2-4ddd-a75b-f7616f1056f3-kube-api-access-tg74f\") pod \"crc-debug-w5fjg\" (UID: \"a4306708-82b2-4ddd-a75b-f7616f1056f3\") " pod="openshift-must-gather-x7hmn/crc-debug-w5fjg"
Jan 21 16:49:20 crc kubenswrapper[4760]: I0121 16:49:20.377060 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-x7hmn/crc-debug-w5fjg"
Jan 21 16:49:20 crc kubenswrapper[4760]: I0121 16:49:20.616663 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x7hmn/crc-debug-w5fjg" event={"ID":"a4306708-82b2-4ddd-a75b-f7616f1056f3","Type":"ContainerStarted","Data":"829ee346d4edd131a993d1e9e2b50d5a478f20cf50d769ae3bddca1be7e0eb0f"}
Jan 21 16:49:20 crc kubenswrapper[4760]: E0121 16:49:20.943500 4760 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda4306708_82b2_4ddd_a75b_f7616f1056f3.slice/crio-conmon-11c316eff07dd8c0fed9ebee55890fe918a392eed1fd0dbbaea20ca32f378719.scope\": RecentStats: unable to find data in memory cache]"
Jan 21 16:49:21 crc kubenswrapper[4760]: I0121 16:49:21.625134 4760 generic.go:334] "Generic (PLEG): container finished" podID="a4306708-82b2-4ddd-a75b-f7616f1056f3" containerID="11c316eff07dd8c0fed9ebee55890fe918a392eed1fd0dbbaea20ca32f378719" exitCode=0
Jan 21 16:49:21 crc kubenswrapper[4760]: I0121 16:49:21.632550 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x7hmn/crc-debug-w5fjg" event={"ID":"a4306708-82b2-4ddd-a75b-f7616f1056f3","Type":"ContainerDied","Data":"11c316eff07dd8c0fed9ebee55890fe918a392eed1fd0dbbaea20ca32f378719"}
Jan 21 16:49:22 crc kubenswrapper[4760]: I0121 16:49:22.114041 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-x7hmn/crc-debug-w5fjg"]
Jan 21 16:49:22 crc kubenswrapper[4760]: I0121 16:49:22.127558 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-x7hmn/crc-debug-w5fjg"]
Jan 21 16:49:22 crc kubenswrapper[4760]: I0121 16:49:22.751773 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-x7hmn/crc-debug-w5fjg"
Jan 21 16:49:22 crc kubenswrapper[4760]: I0121 16:49:22.894708 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a4306708-82b2-4ddd-a75b-f7616f1056f3-host\") pod \"a4306708-82b2-4ddd-a75b-f7616f1056f3\" (UID: \"a4306708-82b2-4ddd-a75b-f7616f1056f3\") "
Jan 21 16:49:22 crc kubenswrapper[4760]: I0121 16:49:22.894821 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4306708-82b2-4ddd-a75b-f7616f1056f3-host" (OuterVolumeSpecName: "host") pod "a4306708-82b2-4ddd-a75b-f7616f1056f3" (UID: "a4306708-82b2-4ddd-a75b-f7616f1056f3"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 16:49:22 crc kubenswrapper[4760]: I0121 16:49:22.894853 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tg74f\" (UniqueName: \"kubernetes.io/projected/a4306708-82b2-4ddd-a75b-f7616f1056f3-kube-api-access-tg74f\") pod \"a4306708-82b2-4ddd-a75b-f7616f1056f3\" (UID: \"a4306708-82b2-4ddd-a75b-f7616f1056f3\") "
Jan 21 16:49:22 crc kubenswrapper[4760]: I0121 16:49:22.895594 4760 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a4306708-82b2-4ddd-a75b-f7616f1056f3-host\") on node \"crc\" DevicePath \"\""
Jan 21 16:49:22 crc kubenswrapper[4760]: I0121 16:49:22.900402 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4306708-82b2-4ddd-a75b-f7616f1056f3-kube-api-access-tg74f" (OuterVolumeSpecName: "kube-api-access-tg74f") pod "a4306708-82b2-4ddd-a75b-f7616f1056f3" (UID: "a4306708-82b2-4ddd-a75b-f7616f1056f3"). InnerVolumeSpecName "kube-api-access-tg74f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:49:22 crc kubenswrapper[4760]: I0121 16:49:22.997509 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tg74f\" (UniqueName: \"kubernetes.io/projected/a4306708-82b2-4ddd-a75b-f7616f1056f3-kube-api-access-tg74f\") on node \"crc\" DevicePath \"\""
Jan 21 16:49:23 crc kubenswrapper[4760]: I0121 16:49:23.302382 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-x7hmn/crc-debug-jjf8r"]
Jan 21 16:49:23 crc kubenswrapper[4760]: E0121 16:49:23.302763 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4306708-82b2-4ddd-a75b-f7616f1056f3" containerName="container-00"
Jan 21 16:49:23 crc kubenswrapper[4760]: I0121 16:49:23.302775 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4306708-82b2-4ddd-a75b-f7616f1056f3" containerName="container-00"
Jan 21 16:49:23 crc kubenswrapper[4760]: I0121 16:49:23.302975 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4306708-82b2-4ddd-a75b-f7616f1056f3" containerName="container-00"
Jan 21 16:49:23 crc kubenswrapper[4760]: I0121 16:49:23.303743 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-x7hmn/crc-debug-jjf8r"
Jan 21 16:49:23 crc kubenswrapper[4760]: I0121 16:49:23.404737 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkjx7\" (UniqueName: \"kubernetes.io/projected/c867c3d5-e820-4502-9dbd-57f916c85f07-kube-api-access-dkjx7\") pod \"crc-debug-jjf8r\" (UID: \"c867c3d5-e820-4502-9dbd-57f916c85f07\") " pod="openshift-must-gather-x7hmn/crc-debug-jjf8r"
Jan 21 16:49:23 crc kubenswrapper[4760]: I0121 16:49:23.405110 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c867c3d5-e820-4502-9dbd-57f916c85f07-host\") pod \"crc-debug-jjf8r\" (UID: \"c867c3d5-e820-4502-9dbd-57f916c85f07\") " pod="openshift-must-gather-x7hmn/crc-debug-jjf8r"
Jan 21 16:49:23 crc kubenswrapper[4760]: I0121 16:49:23.506792 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkjx7\" (UniqueName: \"kubernetes.io/projected/c867c3d5-e820-4502-9dbd-57f916c85f07-kube-api-access-dkjx7\") pod \"crc-debug-jjf8r\" (UID: \"c867c3d5-e820-4502-9dbd-57f916c85f07\") " pod="openshift-must-gather-x7hmn/crc-debug-jjf8r"
Jan 21 16:49:23 crc kubenswrapper[4760]: I0121 16:49:23.506882 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c867c3d5-e820-4502-9dbd-57f916c85f07-host\") pod \"crc-debug-jjf8r\" (UID: \"c867c3d5-e820-4502-9dbd-57f916c85f07\") " pod="openshift-must-gather-x7hmn/crc-debug-jjf8r"
Jan 21 16:49:23 crc kubenswrapper[4760]: I0121 16:49:23.507079 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c867c3d5-e820-4502-9dbd-57f916c85f07-host\") pod \"crc-debug-jjf8r\" (UID: \"c867c3d5-e820-4502-9dbd-57f916c85f07\") " pod="openshift-must-gather-x7hmn/crc-debug-jjf8r"
Jan 21 16:49:23 crc kubenswrapper[4760]: I0121 16:49:23.523818 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkjx7\" (UniqueName: \"kubernetes.io/projected/c867c3d5-e820-4502-9dbd-57f916c85f07-kube-api-access-dkjx7\") pod \"crc-debug-jjf8r\" (UID: \"c867c3d5-e820-4502-9dbd-57f916c85f07\") " pod="openshift-must-gather-x7hmn/crc-debug-jjf8r"
Jan 21 16:49:23 crc kubenswrapper[4760]: I0121 16:49:23.621796 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-x7hmn/crc-debug-jjf8r"
Jan 21 16:49:23 crc kubenswrapper[4760]: I0121 16:49:23.633755 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4306708-82b2-4ddd-a75b-f7616f1056f3" path="/var/lib/kubelet/pods/a4306708-82b2-4ddd-a75b-f7616f1056f3/volumes"
Jan 21 16:49:23 crc kubenswrapper[4760]: I0121 16:49:23.643746 4760 scope.go:117] "RemoveContainer" containerID="11c316eff07dd8c0fed9ebee55890fe918a392eed1fd0dbbaea20ca32f378719"
Jan 21 16:49:23 crc kubenswrapper[4760]: I0121 16:49:23.643929 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-x7hmn/crc-debug-w5fjg"
Jan 21 16:49:23 crc kubenswrapper[4760]: W0121 16:49:23.652436 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc867c3d5_e820_4502_9dbd_57f916c85f07.slice/crio-7c4e6a3b2fef1360cf0098586dec25508a83f1b40e10b4745a7b68c136df23e5 WatchSource:0}: Error finding container 7c4e6a3b2fef1360cf0098586dec25508a83f1b40e10b4745a7b68c136df23e5: Status 404 returned error can't find the container with id 7c4e6a3b2fef1360cf0098586dec25508a83f1b40e10b4745a7b68c136df23e5
Jan 21 16:49:24 crc kubenswrapper[4760]: I0121 16:49:24.661488 4760 generic.go:334] "Generic (PLEG): container finished" podID="c867c3d5-e820-4502-9dbd-57f916c85f07" containerID="0a3f60f3de7f1779a02534ad04ff5e44c8a3d1440526488559b55782f4d0b9ed" exitCode=0
Jan 21 16:49:24 crc kubenswrapper[4760]: I0121 16:49:24.661573 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x7hmn/crc-debug-jjf8r" event={"ID":"c867c3d5-e820-4502-9dbd-57f916c85f07","Type":"ContainerDied","Data":"0a3f60f3de7f1779a02534ad04ff5e44c8a3d1440526488559b55782f4d0b9ed"}
Jan 21 16:49:24 crc kubenswrapper[4760]: I0121 16:49:24.661841 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x7hmn/crc-debug-jjf8r" event={"ID":"c867c3d5-e820-4502-9dbd-57f916c85f07","Type":"ContainerStarted","Data":"7c4e6a3b2fef1360cf0098586dec25508a83f1b40e10b4745a7b68c136df23e5"}
Jan 21 16:49:24 crc kubenswrapper[4760]: I0121 16:49:24.703508 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-x7hmn/crc-debug-jjf8r"]
Jan 21 16:49:24 crc kubenswrapper[4760]: I0121 16:49:24.711006 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-x7hmn/crc-debug-jjf8r"]
Jan 21 16:49:25 crc kubenswrapper[4760]: I0121 16:49:25.228782 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-xn52x_2291564d-b6d1-4334-86b3-a41d012c6827/cert-manager-controller/0.log"
Jan 21 16:49:25 crc kubenswrapper[4760]: I0121 16:49:25.274018 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-r7btf_f66dc60b-4a53-45ba-a0af-74d7ddd2d6b4/cert-manager-cainjector/0.log"
Jan 21 16:49:25 crc kubenswrapper[4760]: I0121 16:49:25.292061 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-rhdtg_9c2bdefb-6d75-4da7-89bb-160ec8b900da/cert-manager-webhook/0.log"
Jan 21 16:49:25 crc kubenswrapper[4760]: I0121 16:49:25.784029 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-x7hmn/crc-debug-jjf8r"
Jan 21 16:49:25 crc kubenswrapper[4760]: I0121 16:49:25.961181 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkjx7\" (UniqueName: \"kubernetes.io/projected/c867c3d5-e820-4502-9dbd-57f916c85f07-kube-api-access-dkjx7\") pod \"c867c3d5-e820-4502-9dbd-57f916c85f07\" (UID: \"c867c3d5-e820-4502-9dbd-57f916c85f07\") "
Jan 21 16:49:25 crc kubenswrapper[4760]: I0121 16:49:25.961937 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c867c3d5-e820-4502-9dbd-57f916c85f07-host\") pod \"c867c3d5-e820-4502-9dbd-57f916c85f07\" (UID: \"c867c3d5-e820-4502-9dbd-57f916c85f07\") "
Jan 21 16:49:25 crc kubenswrapper[4760]: I0121 16:49:25.962228 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c867c3d5-e820-4502-9dbd-57f916c85f07-host" (OuterVolumeSpecName: "host") pod "c867c3d5-e820-4502-9dbd-57f916c85f07" (UID: "c867c3d5-e820-4502-9dbd-57f916c85f07"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 16:49:25 crc kubenswrapper[4760]: I0121 16:49:25.962994 4760 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c867c3d5-e820-4502-9dbd-57f916c85f07-host\") on node \"crc\" DevicePath \"\""
Jan 21 16:49:25 crc kubenswrapper[4760]: I0121 16:49:25.969012 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c867c3d5-e820-4502-9dbd-57f916c85f07-kube-api-access-dkjx7" (OuterVolumeSpecName: "kube-api-access-dkjx7") pod "c867c3d5-e820-4502-9dbd-57f916c85f07" (UID: "c867c3d5-e820-4502-9dbd-57f916c85f07"). InnerVolumeSpecName "kube-api-access-dkjx7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:49:26 crc kubenswrapper[4760]: I0121 16:49:26.064644 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkjx7\" (UniqueName: \"kubernetes.io/projected/c867c3d5-e820-4502-9dbd-57f916c85f07-kube-api-access-dkjx7\") on node \"crc\" DevicePath \"\""
Jan 21 16:49:26 crc kubenswrapper[4760]: I0121 16:49:26.679869 4760 scope.go:117] "RemoveContainer" containerID="0a3f60f3de7f1779a02534ad04ff5e44c8a3d1440526488559b55782f4d0b9ed"
Jan 21 16:49:26 crc kubenswrapper[4760]: I0121 16:49:26.680041 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-x7hmn/crc-debug-jjf8r"
Jan 21 16:49:27 crc kubenswrapper[4760]: I0121 16:49:27.632000 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c867c3d5-e820-4502-9dbd-57f916c85f07" path="/var/lib/kubelet/pods/c867c3d5-e820-4502-9dbd-57f916c85f07/volumes"
Jan 21 16:49:30 crc kubenswrapper[4760]: I0121 16:49:30.926643 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-gwfqw_b83e6b43-dd2e-439e-afb2-e168dcd42605/nmstate-console-plugin/0.log"
Jan 21 16:49:30 crc kubenswrapper[4760]: I0121 16:49:30.950376 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-5b9fb_272d3255-cc65-43d6-89d6-37962ec071f1/nmstate-handler/0.log"
Jan 21 16:49:30 crc kubenswrapper[4760]: I0121 16:49:30.962626 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-k5n9g_497fc134-f9a5-47ff-80ba-2c702922274a/nmstate-metrics/0.log"
Jan 21 16:49:30 crc kubenswrapper[4760]: I0121 16:49:30.972580 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-k5n9g_497fc134-f9a5-47ff-80ba-2c702922274a/kube-rbac-proxy/0.log"
Jan 21 16:49:30 crc kubenswrapper[4760]: I0121 16:49:30.988990 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-lrskp_f088d446-a779-4351-80aa-30d855335e4c/nmstate-operator/0.log"
Jan 21 16:49:31 crc kubenswrapper[4760]: I0121 16:49:31.005306 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-v2hbl_80bcb070-867d-4d94-9f7b-73ff6c767a78/nmstate-webhook/0.log"
Jan 21 16:49:43 crc kubenswrapper[4760]: I0121 16:49:43.536287 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-skl79_57bfd668-6e8b-475a-99b4-cdbd22c9c19f/controller/0.log"
Jan 21 16:49:43 crc kubenswrapper[4760]: I0121 16:49:43.542618 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-skl79_57bfd668-6e8b-475a-99b4-cdbd22c9c19f/kube-rbac-proxy/0.log"
Jan 21 16:49:43 crc kubenswrapper[4760]: I0121 16:49:43.570908 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/controller/0.log"
Jan 21 16:49:44 crc kubenswrapper[4760]: I0121 16:49:44.877300 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/frr/0.log"
Jan 21 16:49:44 crc kubenswrapper[4760]: I0121 16:49:44.895743 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/reloader/0.log"
Jan 21 16:49:44 crc kubenswrapper[4760]: I0121 16:49:44.900965 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/frr-metrics/0.log"
Jan 21 16:49:44 crc kubenswrapper[4760]: I0121 16:49:44.910838 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/kube-rbac-proxy/0.log"
Jan 21 16:49:44 crc kubenswrapper[4760]: I0121 16:49:44.918632 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/kube-rbac-proxy-frr/0.log"
Jan 21 16:49:44 crc kubenswrapper[4760]: I0121 16:49:44.933510 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/cp-frr-files/0.log"
Jan 21 16:49:44 crc kubenswrapper[4760]: I0121 16:49:44.942575 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/cp-reloader/0.log"
Jan 21 16:49:44 crc kubenswrapper[4760]: I0121 16:49:44.951841 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/cp-metrics/0.log"
Jan 21 16:49:44 crc kubenswrapper[4760]: I0121 16:49:44.963531 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-8cr5r_120c759b-d895-4898-a35a-2c7f74bb71b2/frr-k8s-webhook-server/0.log"
Jan 21 16:49:44 crc kubenswrapper[4760]: I0121 16:49:44.993435 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6c4667f969-l2pv4_18110c9f-5a23-4a4c-9b39-289c23ff6e1c/manager/0.log"
Jan 21 16:49:45 crc kubenswrapper[4760]: I0121 16:49:45.006637 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-746c87857b-5gngc_280fc33b-ec55-41cd-92e4-17ed099904a0/webhook-server/0.log"
Jan 21 16:49:45 crc kubenswrapper[4760]: I0121 16:49:45.358006 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-d6jcx_dbe6716c-6a30-454c-979c-59566d2c29b6/speaker/0.log"
Jan 21 16:49:45 crc kubenswrapper[4760]: I0121 16:49:45.365022 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-d6jcx_dbe6716c-6a30-454c-979c-59566d2c29b6/kube-rbac-proxy/0.log"
Jan 21 16:49:49 crc kubenswrapper[4760]: I0121 16:49:49.382530 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl_11e5baeb-8bc7-4f75-bfcf-5128246fe0af/extract/0.log"
Jan 21 16:49:49 crc kubenswrapper[4760]: I0121 16:49:49.397310 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl_11e5baeb-8bc7-4f75-bfcf-5128246fe0af/util/0.log"
Jan 21 16:49:49 crc kubenswrapper[4760]: I0121 16:49:49.407033 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl_11e5baeb-8bc7-4f75-bfcf-5128246fe0af/pull/0.log"
Jan 21 16:49:49 crc kubenswrapper[4760]: I0121 16:49:49.419315 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k_e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3/extract/0.log"
Jan 21 16:49:49 crc kubenswrapper[4760]: I0121 16:49:49.427694 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k_e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3/util/0.log"
Jan 21 16:49:49 crc kubenswrapper[4760]: I0121 16:49:49.438971 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k_e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3/pull/0.log"
Jan 21 16:49:50 crc kubenswrapper[4760]: I0121 16:49:50.442053 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-r9xz4_3b7a88f7-910c-443d-8dbc-471879998d6a/registry-server/0.log"
Jan 21 16:49:50 crc kubenswrapper[4760]: I0121 16:49:50.448392 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-r9xz4_3b7a88f7-910c-443d-8dbc-471879998d6a/extract-utilities/0.log"
Jan 21 16:49:50 crc kubenswrapper[4760]: I0121 16:49:50.459500 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-r9xz4_3b7a88f7-910c-443d-8dbc-471879998d6a/extract-content/0.log"
Jan 21 16:49:50 crc kubenswrapper[4760]: I0121 16:49:50.906025 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-f6w64_eceda6b0-5176-4f10-83f7-2a652e48f206/registry-server/0.log"
Jan 21 16:49:50 crc kubenswrapper[4760]: I0121 16:49:50.917487 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-f6w64_eceda6b0-5176-4f10-83f7-2a652e48f206/extract-utilities/0.log"
Jan 21 16:49:50 crc kubenswrapper[4760]: I0121 16:49:50.934776 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-f6w64_eceda6b0-5176-4f10-83f7-2a652e48f206/extract-content/0.log"
Jan 21 16:49:50 crc kubenswrapper[4760]: I0121 16:49:50.953949 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-lhqrl_a848eafc-6251-4b18-94fd-dddb46db86ca/marketplace-operator/0.log"
Jan 21 16:49:51 crc kubenswrapper[4760]: I0121 16:49:51.105897 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-dbfv2_f0782378-6389-4c4d-b387-3d2860fb524f/registry-server/0.log"
Jan 21 16:49:51 crc kubenswrapper[4760]: I0121 16:49:51.111618 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-dbfv2_f0782378-6389-4c4d-b387-3d2860fb524f/extract-utilities/0.log"
Jan 21 16:49:51 crc kubenswrapper[4760]: I0121 16:49:51.118300 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-dbfv2_f0782378-6389-4c4d-b387-3d2860fb524f/extract-content/0.log"
Jan 21 16:49:51 crc kubenswrapper[4760]: I0121 16:49:51.700901 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-v4mmm_a74de4f6-26aa-473e-87a5-b4a2a30f0596/registry-server/0.log"
Jan 21 16:49:51 crc kubenswrapper[4760]: I0121 16:49:51.706510 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-v4mmm_a74de4f6-26aa-473e-87a5-b4a2a30f0596/extract-utilities/0.log"
Jan 21 16:49:51 crc kubenswrapper[4760]: I0121 16:49:51.717349 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-v4mmm_a74de4f6-26aa-473e-87a5-b4a2a30f0596/extract-content/0.log"
Jan 21 16:49:53 crc kubenswrapper[4760]: I0121 16:49:53.650423 4760 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","poda4306708-82b2-4ddd-a75b-f7616f1056f3"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort poda4306708-82b2-4ddd-a75b-f7616f1056f3] : Timed out while waiting for systemd to remove kubepods-besteffort-poda4306708_82b2_4ddd_a75b_f7616f1056f3.slice"
Jan 21 16:49:59 crc kubenswrapper[4760]: I0121 16:49:59.821366 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8z4hb"]
Jan 21 16:49:59 crc kubenswrapper[4760]: E0121 16:49:59.822403 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c867c3d5-e820-4502-9dbd-57f916c85f07" containerName="container-00"
Jan 21 16:49:59 crc kubenswrapper[4760]: I0121 16:49:59.822418 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="c867c3d5-e820-4502-9dbd-57f916c85f07" containerName="container-00"
Jan 21 16:49:59 crc kubenswrapper[4760]: I0121 16:49:59.822659 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="c867c3d5-e820-4502-9dbd-57f916c85f07" containerName="container-00"
Jan 21 16:49:59 crc kubenswrapper[4760]: I0121 16:49:59.824369 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8z4hb"
Jan 21 16:49:59 crc kubenswrapper[4760]: I0121 16:49:59.837385 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8z4hb"]
Jan 21 16:49:59 crc kubenswrapper[4760]: I0121 16:49:59.921223 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d70163da-060a-47bf-b993-0f755a0e7018-catalog-content\") pod \"redhat-marketplace-8z4hb\" (UID: \"d70163da-060a-47bf-b993-0f755a0e7018\") " pod="openshift-marketplace/redhat-marketplace-8z4hb"
Jan 21 16:49:59 crc kubenswrapper[4760]: I0121 16:49:59.921346 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d70163da-060a-47bf-b993-0f755a0e7018-utilities\") pod \"redhat-marketplace-8z4hb\" (UID: \"d70163da-060a-47bf-b993-0f755a0e7018\") " pod="openshift-marketplace/redhat-marketplace-8z4hb"
Jan 21 16:49:59 crc kubenswrapper[4760]: I0121 16:49:59.921424 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jglcq\" (UniqueName: \"kubernetes.io/projected/d70163da-060a-47bf-b993-0f755a0e7018-kube-api-access-jglcq\") pod \"redhat-marketplace-8z4hb\" (UID: \"d70163da-060a-47bf-b993-0f755a0e7018\") " pod="openshift-marketplace/redhat-marketplace-8z4hb"
Jan 21 16:50:00 crc kubenswrapper[4760]: I0121 16:50:00.023864 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jglcq\" (UniqueName: \"kubernetes.io/projected/d70163da-060a-47bf-b993-0f755a0e7018-kube-api-access-jglcq\") pod \"redhat-marketplace-8z4hb\" (UID: \"d70163da-060a-47bf-b993-0f755a0e7018\") " pod="openshift-marketplace/redhat-marketplace-8z4hb"
Jan 21 16:50:00 crc kubenswrapper[4760]: I0121 16:50:00.024036 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d70163da-060a-47bf-b993-0f755a0e7018-catalog-content\") pod \"redhat-marketplace-8z4hb\" (UID: \"d70163da-060a-47bf-b993-0f755a0e7018\") " pod="openshift-marketplace/redhat-marketplace-8z4hb"
Jan 21 16:50:00 crc kubenswrapper[4760]: I0121 16:50:00.024122 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d70163da-060a-47bf-b993-0f755a0e7018-utilities\") pod \"redhat-marketplace-8z4hb\" (UID: \"d70163da-060a-47bf-b993-0f755a0e7018\") " pod="openshift-marketplace/redhat-marketplace-8z4hb"
Jan 21 16:50:00 crc kubenswrapper[4760]: I0121 16:50:00.024719 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d70163da-060a-47bf-b993-0f755a0e7018-utilities\") pod \"redhat-marketplace-8z4hb\" (UID: \"d70163da-060a-47bf-b993-0f755a0e7018\") " pod="openshift-marketplace/redhat-marketplace-8z4hb"
Jan 21 16:50:00 crc kubenswrapper[4760]: I0121 16:50:00.025044 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d70163da-060a-47bf-b993-0f755a0e7018-catalog-content\") pod \"redhat-marketplace-8z4hb\" (UID: \"d70163da-060a-47bf-b993-0f755a0e7018\") " pod="openshift-marketplace/redhat-marketplace-8z4hb"
Jan 21 16:50:00 crc kubenswrapper[4760]: I0121 16:50:00.056183 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jglcq\" (UniqueName: \"kubernetes.io/projected/d70163da-060a-47bf-b993-0f755a0e7018-kube-api-access-jglcq\") pod \"redhat-marketplace-8z4hb\" (UID: \"d70163da-060a-47bf-b993-0f755a0e7018\") " pod="openshift-marketplace/redhat-marketplace-8z4hb"
Jan 21 16:50:00 crc kubenswrapper[4760]: I0121 16:50:00.155044 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8z4hb"
Jan 21 16:50:00 crc kubenswrapper[4760]: I0121 16:50:00.705228 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8z4hb"]
Jan 21 16:50:01 crc kubenswrapper[4760]: I0121 16:50:01.153748 4760 generic.go:334] "Generic (PLEG): container finished" podID="d70163da-060a-47bf-b993-0f755a0e7018" containerID="2e740981d88f40c617092780e4cb67aa4a7ef8366d157728d2f44a5ce8ec306c" exitCode=0
Jan 21 16:50:01 crc kubenswrapper[4760]: I0121 16:50:01.153843 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8z4hb" event={"ID":"d70163da-060a-47bf-b993-0f755a0e7018","Type":"ContainerDied","Data":"2e740981d88f40c617092780e4cb67aa4a7ef8366d157728d2f44a5ce8ec306c"}
Jan 21 16:50:01 crc kubenswrapper[4760]: I0121 16:50:01.154111 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8z4hb" event={"ID":"d70163da-060a-47bf-b993-0f755a0e7018","Type":"ContainerStarted","Data":"8321d91e82c4bc6142bef0b46a681d43d2e9c308158222e17f395d5f0e81a6f9"}
Jan 21 16:50:03 crc kubenswrapper[4760]: I0121 16:50:03.171828 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8z4hb" event={"ID":"d70163da-060a-47bf-b993-0f755a0e7018","Type":"ContainerStarted","Data":"bf64b901057f1dd446c8baa2559e7aeff76341f338fa422ca2fd74114596c26c"}
Jan 21 16:50:04 crc kubenswrapper[4760]: I0121 16:50:04.197208 4760 generic.go:334] "Generic (PLEG): container finished" podID="d70163da-060a-47bf-b993-0f755a0e7018" containerID="bf64b901057f1dd446c8baa2559e7aeff76341f338fa422ca2fd74114596c26c" exitCode=0
Jan 21 16:50:04 crc kubenswrapper[4760]: I0121 16:50:04.197318 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8z4hb" event={"ID":"d70163da-060a-47bf-b993-0f755a0e7018","Type":"ContainerDied","Data":"bf64b901057f1dd446c8baa2559e7aeff76341f338fa422ca2fd74114596c26c"}
Jan 21 16:50:06 crc kubenswrapper[4760]: I0121 16:50:06.224937 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8z4hb" event={"ID":"d70163da-060a-47bf-b993-0f755a0e7018","Type":"ContainerStarted","Data":"db3f276487ff51d2d07450c5f1e4d4adb42df73c97e0851d76c693bfa66d9fe5"}
Jan 21 16:50:06 crc kubenswrapper[4760]: I0121 16:50:06.247974 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8z4hb" podStartSLOduration=2.734373334 podStartE2EDuration="7.247951717s" podCreationTimestamp="2026-01-21 16:49:59 +0000 UTC" firstStartedPulling="2026-01-21 16:50:01.155621586 +0000 UTC m=+3771.823391164" lastFinishedPulling="2026-01-21 16:50:05.669199969 +0000 UTC m=+3776.336969547" observedRunningTime="2026-01-21 16:50:06.241950186 +0000 UTC m=+3776.909719784" watchObservedRunningTime="2026-01-21 16:50:06.247951717 +0000 UTC m=+3776.915721295"
Jan 21 16:50:10 crc kubenswrapper[4760]: I0121 16:50:10.156062 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8z4hb"
Jan 21 16:50:10 crc kubenswrapper[4760]: I0121 16:50:10.156676 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8z4hb"
Jan 21 16:50:10 crc kubenswrapper[4760]: I0121 16:50:10.214484 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8z4hb"
Jan 21 16:50:20 crc kubenswrapper[4760]: I0121 16:50:20.211892 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8z4hb"
Jan 21 16:50:20 crc kubenswrapper[4760]: I0121 16:50:20.267063 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8z4hb"]
Jan 21 16:50:20 crc kubenswrapper[4760]: I0121 16:50:20.336164 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8z4hb" podUID="d70163da-060a-47bf-b993-0f755a0e7018" containerName="registry-server" containerID="cri-o://db3f276487ff51d2d07450c5f1e4d4adb42df73c97e0851d76c693bfa66d9fe5" gracePeriod=2
Jan 21 16:50:20 crc kubenswrapper[4760]: I0121 16:50:20.764225 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8z4hb"
Jan 21 16:50:20 crc kubenswrapper[4760]: I0121 16:50:20.870830 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d70163da-060a-47bf-b993-0f755a0e7018-catalog-content\") pod \"d70163da-060a-47bf-b993-0f755a0e7018\" (UID: \"d70163da-060a-47bf-b993-0f755a0e7018\") "
Jan 21 16:50:20 crc kubenswrapper[4760]: I0121 16:50:20.870890 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jglcq\" (UniqueName: \"kubernetes.io/projected/d70163da-060a-47bf-b993-0f755a0e7018-kube-api-access-jglcq\") pod \"d70163da-060a-47bf-b993-0f755a0e7018\" (UID: \"d70163da-060a-47bf-b993-0f755a0e7018\") "
Jan 21 16:50:20 crc kubenswrapper[4760]: I0121 16:50:20.871025 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d70163da-060a-47bf-b993-0f755a0e7018-utilities\") pod \"d70163da-060a-47bf-b993-0f755a0e7018\" (UID: \"d70163da-060a-47bf-b993-0f755a0e7018\") "
Jan 21 16:50:20 crc kubenswrapper[4760]: I0121 16:50:20.871813 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d70163da-060a-47bf-b993-0f755a0e7018-utilities" (OuterVolumeSpecName: "utilities") pod "d70163da-060a-47bf-b993-0f755a0e7018" (UID: "d70163da-060a-47bf-b993-0f755a0e7018"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:50:20 crc kubenswrapper[4760]: I0121 16:50:20.878584 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d70163da-060a-47bf-b993-0f755a0e7018-kube-api-access-jglcq" (OuterVolumeSpecName: "kube-api-access-jglcq") pod "d70163da-060a-47bf-b993-0f755a0e7018" (UID: "d70163da-060a-47bf-b993-0f755a0e7018"). InnerVolumeSpecName "kube-api-access-jglcq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 16:50:20 crc kubenswrapper[4760]: I0121 16:50:20.899668 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d70163da-060a-47bf-b993-0f755a0e7018-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d70163da-060a-47bf-b993-0f755a0e7018" (UID: "d70163da-060a-47bf-b993-0f755a0e7018"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 16:50:20 crc kubenswrapper[4760]: I0121 16:50:20.946816 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 16:50:20 crc kubenswrapper[4760]: I0121 16:50:20.946896 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 16:50:20 crc kubenswrapper[4760]: I0121 16:50:20.973870 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d70163da-060a-47bf-b993-0f755a0e7018-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 16:50:20 crc kubenswrapper[4760]: I0121 16:50:20.973920 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jglcq\" (UniqueName: \"kubernetes.io/projected/d70163da-060a-47bf-b993-0f755a0e7018-kube-api-access-jglcq\") on node \"crc\" DevicePath \"\""
Jan 21 16:50:20 crc kubenswrapper[4760]: I0121 16:50:20.973936 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d70163da-060a-47bf-b993-0f755a0e7018-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 16:50:21 crc kubenswrapper[4760]: I0121 16:50:21.345822 4760 generic.go:334] "Generic (PLEG): container finished" podID="d70163da-060a-47bf-b993-0f755a0e7018" containerID="db3f276487ff51d2d07450c5f1e4d4adb42df73c97e0851d76c693bfa66d9fe5" exitCode=0
Jan 21 16:50:21 crc kubenswrapper[4760]: I0121 16:50:21.345869 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8z4hb" event={"ID":"d70163da-060a-47bf-b993-0f755a0e7018","Type":"ContainerDied","Data":"db3f276487ff51d2d07450c5f1e4d4adb42df73c97e0851d76c693bfa66d9fe5"}
Jan 21 16:50:21 crc kubenswrapper[4760]: I0121 16:50:21.345900 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8z4hb" event={"ID":"d70163da-060a-47bf-b993-0f755a0e7018","Type":"ContainerDied","Data":"8321d91e82c4bc6142bef0b46a681d43d2e9c308158222e17f395d5f0e81a6f9"}
Jan 21 16:50:21 crc kubenswrapper[4760]: I0121 16:50:21.345919 4760 scope.go:117] "RemoveContainer" containerID="db3f276487ff51d2d07450c5f1e4d4adb42df73c97e0851d76c693bfa66d9fe5"
Jan 21 16:50:21 crc kubenswrapper[4760]: I0121 16:50:21.345948 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8z4hb"
Jan 21 16:50:21 crc kubenswrapper[4760]: I0121 16:50:21.369806 4760 scope.go:117] "RemoveContainer" containerID="bf64b901057f1dd446c8baa2559e7aeff76341f338fa422ca2fd74114596c26c"
Jan 21 16:50:21 crc kubenswrapper[4760]: I0121 16:50:21.384036 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8z4hb"]
Jan 21 16:50:21 crc kubenswrapper[4760]: I0121 16:50:21.393704 4760 scope.go:117] "RemoveContainer" containerID="2e740981d88f40c617092780e4cb67aa4a7ef8366d157728d2f44a5ce8ec306c"
Jan 21 16:50:21 crc kubenswrapper[4760]: I0121 16:50:21.395285 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8z4hb"]
Jan 21 16:50:21 crc kubenswrapper[4760]: I0121 16:50:21.456424 4760 scope.go:117] "RemoveContainer" containerID="db3f276487ff51d2d07450c5f1e4d4adb42df73c97e0851d76c693bfa66d9fe5"
Jan 21 16:50:21 crc kubenswrapper[4760]: E0121 16:50:21.456862 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db3f276487ff51d2d07450c5f1e4d4adb42df73c97e0851d76c693bfa66d9fe5\": container with ID starting with db3f276487ff51d2d07450c5f1e4d4adb42df73c97e0851d76c693bfa66d9fe5 not found: ID does not exist" containerID="db3f276487ff51d2d07450c5f1e4d4adb42df73c97e0851d76c693bfa66d9fe5"
Jan 21 16:50:21 crc kubenswrapper[4760]: I0121 16:50:21.456899 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db3f276487ff51d2d07450c5f1e4d4adb42df73c97e0851d76c693bfa66d9fe5"} err="failed to get container status \"db3f276487ff51d2d07450c5f1e4d4adb42df73c97e0851d76c693bfa66d9fe5\": rpc error: code = NotFound desc = could not find container \"db3f276487ff51d2d07450c5f1e4d4adb42df73c97e0851d76c693bfa66d9fe5\": container with ID starting with db3f276487ff51d2d07450c5f1e4d4adb42df73c97e0851d76c693bfa66d9fe5 not found:
ID does not exist" Jan 21 16:50:21 crc kubenswrapper[4760]: I0121 16:50:21.456924 4760 scope.go:117] "RemoveContainer" containerID="bf64b901057f1dd446c8baa2559e7aeff76341f338fa422ca2fd74114596c26c" Jan 21 16:50:21 crc kubenswrapper[4760]: E0121 16:50:21.457170 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf64b901057f1dd446c8baa2559e7aeff76341f338fa422ca2fd74114596c26c\": container with ID starting with bf64b901057f1dd446c8baa2559e7aeff76341f338fa422ca2fd74114596c26c not found: ID does not exist" containerID="bf64b901057f1dd446c8baa2559e7aeff76341f338fa422ca2fd74114596c26c" Jan 21 16:50:21 crc kubenswrapper[4760]: I0121 16:50:21.457195 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf64b901057f1dd446c8baa2559e7aeff76341f338fa422ca2fd74114596c26c"} err="failed to get container status \"bf64b901057f1dd446c8baa2559e7aeff76341f338fa422ca2fd74114596c26c\": rpc error: code = NotFound desc = could not find container \"bf64b901057f1dd446c8baa2559e7aeff76341f338fa422ca2fd74114596c26c\": container with ID starting with bf64b901057f1dd446c8baa2559e7aeff76341f338fa422ca2fd74114596c26c not found: ID does not exist" Jan 21 16:50:21 crc kubenswrapper[4760]: I0121 16:50:21.457213 4760 scope.go:117] "RemoveContainer" containerID="2e740981d88f40c617092780e4cb67aa4a7ef8366d157728d2f44a5ce8ec306c" Jan 21 16:50:21 crc kubenswrapper[4760]: E0121 16:50:21.457511 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e740981d88f40c617092780e4cb67aa4a7ef8366d157728d2f44a5ce8ec306c\": container with ID starting with 2e740981d88f40c617092780e4cb67aa4a7ef8366d157728d2f44a5ce8ec306c not found: ID does not exist" containerID="2e740981d88f40c617092780e4cb67aa4a7ef8366d157728d2f44a5ce8ec306c" Jan 21 16:50:21 crc kubenswrapper[4760]: I0121 16:50:21.457543 4760 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e740981d88f40c617092780e4cb67aa4a7ef8366d157728d2f44a5ce8ec306c"} err="failed to get container status \"2e740981d88f40c617092780e4cb67aa4a7ef8366d157728d2f44a5ce8ec306c\": rpc error: code = NotFound desc = could not find container \"2e740981d88f40c617092780e4cb67aa4a7ef8366d157728d2f44a5ce8ec306c\": container with ID starting with 2e740981d88f40c617092780e4cb67aa4a7ef8366d157728d2f44a5ce8ec306c not found: ID does not exist" Jan 21 16:50:21 crc kubenswrapper[4760]: I0121 16:50:21.635444 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d70163da-060a-47bf-b993-0f755a0e7018" path="/var/lib/kubelet/pods/d70163da-060a-47bf-b993-0f755a0e7018/volumes" Jan 21 16:50:50 crc kubenswrapper[4760]: I0121 16:50:50.946404 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:50:50 crc kubenswrapper[4760]: I0121 16:50:50.946957 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:51:05 crc kubenswrapper[4760]: I0121 16:51:05.060047 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-skl79_57bfd668-6e8b-475a-99b4-cdbd22c9c19f/controller/0.log" Jan 21 16:51:05 crc kubenswrapper[4760]: I0121 16:51:05.067019 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-skl79_57bfd668-6e8b-475a-99b4-cdbd22c9c19f/kube-rbac-proxy/0.log" Jan 21 16:51:05 crc kubenswrapper[4760]: I0121 
16:51:05.086215 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/controller/0.log" Jan 21 16:51:05 crc kubenswrapper[4760]: I0121 16:51:05.288792 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-xn52x_2291564d-b6d1-4334-86b3-a41d012c6827/cert-manager-controller/0.log" Jan 21 16:51:05 crc kubenswrapper[4760]: I0121 16:51:05.304402 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-r7btf_f66dc60b-4a53-45ba-a0af-74d7ddd2d6b4/cert-manager-cainjector/0.log" Jan 21 16:51:05 crc kubenswrapper[4760]: I0121 16:51:05.315196 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-rhdtg_9c2bdefb-6d75-4da7-89bb-160ec8b900da/cert-manager-webhook/0.log" Jan 21 16:51:06 crc kubenswrapper[4760]: I0121 16:51:06.440880 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7ddb5c749-nszmq_ebbdf3cf-f86a-471e-89d0-d2a43f8245f6/manager/0.log" Jan 21 16:51:06 crc kubenswrapper[4760]: I0121 16:51:06.514588 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-zlfp7_6026e9ac-64d0-4386-bbd8-f0ac19960a22/manager/0.log" Jan 21 16:51:06 crc kubenswrapper[4760]: I0121 16:51:06.523698 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/frr/0.log" Jan 21 16:51:06 crc kubenswrapper[4760]: I0121 16:51:06.532371 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-kc2f5_8bcbe073-fa37-480d-a74a-af4c8d6a449b/manager/0.log" Jan 21 16:51:06 crc kubenswrapper[4760]: I0121 16:51:06.535908 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/reloader/0.log" Jan 21 16:51:06 crc kubenswrapper[4760]: I0121 16:51:06.539961 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/frr-metrics/0.log" Jan 21 16:51:06 crc kubenswrapper[4760]: I0121 16:51:06.548479 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/kube-rbac-proxy/0.log" Jan 21 16:51:06 crc kubenswrapper[4760]: I0121 16:51:06.554769 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql_ab7a2391-a0e7-4576-a91a-bf31978dc7ad/extract/0.log" Jan 21 16:51:06 crc kubenswrapper[4760]: I0121 16:51:06.557100 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/kube-rbac-proxy-frr/0.log" Jan 21 16:51:06 crc kubenswrapper[4760]: I0121 16:51:06.560730 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql_ab7a2391-a0e7-4576-a91a-bf31978dc7ad/util/0.log" Jan 21 16:51:06 crc kubenswrapper[4760]: I0121 16:51:06.564967 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/cp-frr-files/0.log" Jan 21 16:51:06 crc kubenswrapper[4760]: I0121 16:51:06.566196 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql_ab7a2391-a0e7-4576-a91a-bf31978dc7ad/pull/0.log" Jan 21 16:51:06 crc kubenswrapper[4760]: I0121 16:51:06.572993 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/cp-reloader/0.log" Jan 21 16:51:06 crc kubenswrapper[4760]: 
I0121 16:51:06.580259 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/cp-metrics/0.log" Jan 21 16:51:06 crc kubenswrapper[4760]: I0121 16:51:06.596110 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-8cr5r_120c759b-d895-4898-a35a-2c7f74bb71b2/frr-k8s-webhook-server/0.log" Jan 21 16:51:06 crc kubenswrapper[4760]: I0121 16:51:06.630238 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6c4667f969-l2pv4_18110c9f-5a23-4a4c-9b39-289c23ff6e1c/manager/0.log" Jan 21 16:51:06 crc kubenswrapper[4760]: I0121 16:51:06.642420 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-746c87857b-5gngc_280fc33b-ec55-41cd-92e4-17ed099904a0/webhook-server/0.log" Jan 21 16:51:06 crc kubenswrapper[4760]: I0121 16:51:06.650780 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-z2bkt_bac59717-45dd-495a-8874-b4f29a8adc3f/manager/0.log" Jan 21 16:51:06 crc kubenswrapper[4760]: I0121 16:51:06.667881 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-k92xb_97d1cdc7-8fc8-4e7b-b231-0cceadc61597/manager/0.log" Jan 21 16:51:06 crc kubenswrapper[4760]: I0121 16:51:06.709439 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-wp6f6_1b969ec1-1858-44ff-92da-a071b9ff15ee/manager/0.log" Jan 21 16:51:07 crc kubenswrapper[4760]: I0121 16:51:07.077778 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-7trxk_a441beba-fca9-47d4-bf5b-1533929ea421/manager/0.log" Jan 21 16:51:07 crc kubenswrapper[4760]: I0121 16:51:07.094307 4760 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-z7mkd_a28cddfd-04c6-4860-a5eb-c341f2b25009/manager/0.log" Jan 21 16:51:07 crc kubenswrapper[4760]: I0121 16:51:07.106030 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-d6jcx_dbe6716c-6a30-454c-979c-59566d2c29b6/speaker/0.log" Jan 21 16:51:07 crc kubenswrapper[4760]: I0121 16:51:07.115246 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-d6jcx_dbe6716c-6a30-454c-979c-59566d2c29b6/kube-rbac-proxy/0.log" Jan 21 16:51:07 crc kubenswrapper[4760]: I0121 16:51:07.158555 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-pp2ln_f3256ca9-a35f-4ae0-a56c-ac2eaf3bbdc3/manager/0.log" Jan 21 16:51:07 crc kubenswrapper[4760]: I0121 16:51:07.172101 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-rjrtw_1530b88f-1192-4aa8-b9ba-82f23e37ea6a/manager/0.log" Jan 21 16:51:07 crc kubenswrapper[4760]: I0121 16:51:07.222083 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-chvdr_80ad016c-9145-4e38-90f1-515a1fcd0fc7/manager/0.log" Jan 21 16:51:07 crc kubenswrapper[4760]: I0121 16:51:07.276632 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-7vqlg_2ef1c912-1599-4799-8f4c-1c9cb20045ba/manager/0.log" Jan 21 16:51:07 crc kubenswrapper[4760]: I0121 16:51:07.357492 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-xckkd_7e819adc-151b-456f-b41f-5101b03ab7b2/manager/0.log" Jan 21 16:51:07 crc kubenswrapper[4760]: I0121 16:51:07.369319 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-566bc_0252011a-4dac-4cad-94b3-39a6cf9bcd42/manager/0.log" Jan 21 16:51:07 crc kubenswrapper[4760]: I0121 16:51:07.385395 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt_28e62955-b747-4ca8-aa6b-d0678242596f/manager/0.log" Jan 21 16:51:07 crc kubenswrapper[4760]: I0121 16:51:07.519022 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5bb58d564b-c5ghx_5ef28c93-e9fc-4d47-b280-5372e4c7aaf7/operator/0.log" Jan 21 16:51:08 crc kubenswrapper[4760]: I0121 16:51:08.756559 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-867799c6f-wh9wg_4023c758-3567-4e32-97de-9501e117e965/manager/0.log" Jan 21 16:51:08 crc kubenswrapper[4760]: I0121 16:51:08.785584 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-7qqml_593c7623-4bb3-4d34-b7cf-b7bcaa5d292e/registry-server/0.log" Jan 21 16:51:08 crc kubenswrapper[4760]: I0121 16:51:08.862548 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-ffq4x_daef61f2-122d-4414-b7df-24982387fa95/manager/0.log" Jan 21 16:51:08 crc kubenswrapper[4760]: I0121 16:51:08.872030 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-xn52x_2291564d-b6d1-4334-86b3-a41d012c6827/cert-manager-controller/0.log" Jan 21 16:51:08 crc kubenswrapper[4760]: I0121 16:51:08.891476 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-r7btf_f66dc60b-4a53-45ba-a0af-74d7ddd2d6b4/cert-manager-cainjector/0.log" Jan 21 16:51:08 crc kubenswrapper[4760]: I0121 16:51:08.894297 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-lqgfs_75bcd345-56d6-4c12-9392-eea68c43dc30/manager/0.log" Jan 21 16:51:08 crc kubenswrapper[4760]: I0121 16:51:08.908701 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-rhdtg_9c2bdefb-6d75-4da7-89bb-160ec8b900da/cert-manager-webhook/0.log" Jan 21 16:51:08 crc kubenswrapper[4760]: I0121 16:51:08.921808 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-vxwmq_a2806ede-c1d4-4571-8829-1b94cf7d1606/operator/0.log" Jan 21 16:51:08 crc kubenswrapper[4760]: I0121 16:51:08.953792 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-49prq_8d3c8a68-0896-4875-b6ff-d6f6fd2794b6/manager/0.log" Jan 21 16:51:09 crc kubenswrapper[4760]: I0121 16:51:09.019627 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-m7zb2_b511b419-e589-4783-a6a8-6d6fee8decde/manager/0.log" Jan 21 16:51:09 crc kubenswrapper[4760]: I0121 16:51:09.029287 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7cd8bc9dbb-cfsr6_813b8c35-22e2-41a4-9523-a6cf3cd99ab2/manager/0.log" Jan 21 16:51:09 crc kubenswrapper[4760]: I0121 16:51:09.042894 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-fkd2l_d8bbdcea-a920-4fb4-b434-2323a28d0ea7/manager/0.log" Jan 21 16:51:09 crc kubenswrapper[4760]: I0121 16:51:09.468277 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-dm455_0df700c2-3091-4770-b404-cc81bc416387/control-plane-machine-set-operator/0.log" Jan 21 16:51:09 crc kubenswrapper[4760]: I0121 16:51:09.482947 4760 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-4x9fq_3671d10c-81c6-4c7f-9117-1c237e4efe51/kube-rbac-proxy/0.log" Jan 21 16:51:09 crc kubenswrapper[4760]: I0121 16:51:09.611474 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-4x9fq_3671d10c-81c6-4c7f-9117-1c237e4efe51/machine-api-operator/0.log" Jan 21 16:51:10 crc kubenswrapper[4760]: I0121 16:51:10.314503 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-gwfqw_b83e6b43-dd2e-439e-afb2-e168dcd42605/nmstate-console-plugin/0.log" Jan 21 16:51:10 crc kubenswrapper[4760]: I0121 16:51:10.331282 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-5b9fb_272d3255-cc65-43d6-89d6-37962ec071f1/nmstate-handler/0.log" Jan 21 16:51:10 crc kubenswrapper[4760]: I0121 16:51:10.339553 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-k5n9g_497fc134-f9a5-47ff-80ba-2c702922274a/nmstate-metrics/0.log" Jan 21 16:51:10 crc kubenswrapper[4760]: I0121 16:51:10.350194 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-k5n9g_497fc134-f9a5-47ff-80ba-2c702922274a/kube-rbac-proxy/0.log" Jan 21 16:51:10 crc kubenswrapper[4760]: I0121 16:51:10.360065 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-lrskp_f088d446-a779-4351-80aa-30d855335e4c/nmstate-operator/0.log" Jan 21 16:51:10 crc kubenswrapper[4760]: I0121 16:51:10.375244 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-v2hbl_80bcb070-867d-4d94-9f7b-73ff6c767a78/nmstate-webhook/0.log" Jan 21 16:51:10 crc kubenswrapper[4760]: I0121 16:51:10.481066 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7ddb5c749-nszmq_ebbdf3cf-f86a-471e-89d0-d2a43f8245f6/manager/0.log" Jan 21 16:51:10 crc kubenswrapper[4760]: I0121 16:51:10.530872 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-zlfp7_6026e9ac-64d0-4386-bbd8-f0ac19960a22/manager/0.log" Jan 21 16:51:10 crc kubenswrapper[4760]: I0121 16:51:10.543313 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-kc2f5_8bcbe073-fa37-480d-a74a-af4c8d6a449b/manager/0.log" Jan 21 16:51:10 crc kubenswrapper[4760]: I0121 16:51:10.553445 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql_ab7a2391-a0e7-4576-a91a-bf31978dc7ad/extract/0.log" Jan 21 16:51:10 crc kubenswrapper[4760]: I0121 16:51:10.571710 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql_ab7a2391-a0e7-4576-a91a-bf31978dc7ad/util/0.log" Jan 21 16:51:10 crc kubenswrapper[4760]: I0121 16:51:10.585084 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql_ab7a2391-a0e7-4576-a91a-bf31978dc7ad/pull/0.log" Jan 21 16:51:10 crc kubenswrapper[4760]: I0121 16:51:10.663018 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-z2bkt_bac59717-45dd-495a-8874-b4f29a8adc3f/manager/0.log" Jan 21 16:51:10 crc kubenswrapper[4760]: I0121 16:51:10.673311 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-k92xb_97d1cdc7-8fc8-4e7b-b231-0cceadc61597/manager/0.log" Jan 21 16:51:10 crc kubenswrapper[4760]: I0121 16:51:10.698317 4760 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-wp6f6_1b969ec1-1858-44ff-92da-a071b9ff15ee/manager/0.log" Jan 21 16:51:10 crc kubenswrapper[4760]: I0121 16:51:10.946099 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-7trxk_a441beba-fca9-47d4-bf5b-1533929ea421/manager/0.log" Jan 21 16:51:10 crc kubenswrapper[4760]: I0121 16:51:10.957241 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-z7mkd_a28cddfd-04c6-4860-a5eb-c341f2b25009/manager/0.log" Jan 21 16:51:11 crc kubenswrapper[4760]: I0121 16:51:11.041336 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-pp2ln_f3256ca9-a35f-4ae0-a56c-ac2eaf3bbdc3/manager/0.log" Jan 21 16:51:11 crc kubenswrapper[4760]: I0121 16:51:11.057816 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-rjrtw_1530b88f-1192-4aa8-b9ba-82f23e37ea6a/manager/0.log" Jan 21 16:51:11 crc kubenswrapper[4760]: I0121 16:51:11.104848 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-chvdr_80ad016c-9145-4e38-90f1-515a1fcd0fc7/manager/0.log" Jan 21 16:51:11 crc kubenswrapper[4760]: I0121 16:51:11.154605 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-7vqlg_2ef1c912-1599-4799-8f4c-1c9cb20045ba/manager/0.log" Jan 21 16:51:11 crc kubenswrapper[4760]: I0121 16:51:11.257689 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-xckkd_7e819adc-151b-456f-b41f-5101b03ab7b2/manager/0.log" Jan 21 16:51:11 crc kubenswrapper[4760]: I0121 
16:51:11.268528 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-566bc_0252011a-4dac-4cad-94b3-39a6cf9bcd42/manager/0.log" Jan 21 16:51:11 crc kubenswrapper[4760]: I0121 16:51:11.287942 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt_28e62955-b747-4ca8-aa6b-d0678242596f/manager/0.log" Jan 21 16:51:11 crc kubenswrapper[4760]: I0121 16:51:11.390607 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5bb58d564b-c5ghx_5ef28c93-e9fc-4d47-b280-5372e4c7aaf7/operator/0.log" Jan 21 16:51:12 crc kubenswrapper[4760]: I0121 16:51:12.616474 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-867799c6f-wh9wg_4023c758-3567-4e32-97de-9501e117e965/manager/0.log" Jan 21 16:51:12 crc kubenswrapper[4760]: I0121 16:51:12.628337 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-7qqml_593c7623-4bb3-4d34-b7cf-b7bcaa5d292e/registry-server/0.log" Jan 21 16:51:12 crc kubenswrapper[4760]: I0121 16:51:12.693381 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-ffq4x_daef61f2-122d-4414-b7df-24982387fa95/manager/0.log" Jan 21 16:51:12 crc kubenswrapper[4760]: I0121 16:51:12.718725 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-lqgfs_75bcd345-56d6-4c12-9392-eea68c43dc30/manager/0.log" Jan 21 16:51:12 crc kubenswrapper[4760]: I0121 16:51:12.738028 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-vxwmq_a2806ede-c1d4-4571-8829-1b94cf7d1606/operator/0.log" Jan 21 16:51:12 crc 
kubenswrapper[4760]: I0121 16:51:12.770171 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-49prq_8d3c8a68-0896-4875-b6ff-d6f6fd2794b6/manager/0.log" Jan 21 16:51:12 crc kubenswrapper[4760]: I0121 16:51:12.841563 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-m7zb2_b511b419-e589-4783-a6a8-6d6fee8decde/manager/0.log" Jan 21 16:51:12 crc kubenswrapper[4760]: I0121 16:51:12.853002 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7cd8bc9dbb-cfsr6_813b8c35-22e2-41a4-9523-a6cf3cd99ab2/manager/0.log" Jan 21 16:51:12 crc kubenswrapper[4760]: I0121 16:51:12.862537 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-fkd2l_d8bbdcea-a920-4fb4-b434-2323a28d0ea7/manager/0.log" Jan 21 16:51:14 crc kubenswrapper[4760]: I0121 16:51:14.361898 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-lkblz_bd3c6c18-f174-4022-96c5-5892413c76fd/kube-multus-additional-cni-plugins/0.log" Jan 21 16:51:14 crc kubenswrapper[4760]: I0121 16:51:14.369820 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-lkblz_bd3c6c18-f174-4022-96c5-5892413c76fd/egress-router-binary-copy/0.log" Jan 21 16:51:14 crc kubenswrapper[4760]: I0121 16:51:14.378036 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-lkblz_bd3c6c18-f174-4022-96c5-5892413c76fd/cni-plugins/0.log" Jan 21 16:51:14 crc kubenswrapper[4760]: I0121 16:51:14.384602 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-lkblz_bd3c6c18-f174-4022-96c5-5892413c76fd/bond-cni-plugin/0.log" Jan 21 16:51:14 crc 
kubenswrapper[4760]: I0121 16:51:14.392704 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-lkblz_bd3c6c18-f174-4022-96c5-5892413c76fd/routeoverride-cni/0.log"
Jan 21 16:51:14 crc kubenswrapper[4760]: I0121 16:51:14.397607 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-lkblz_bd3c6c18-f174-4022-96c5-5892413c76fd/whereabouts-cni-bincopy/0.log"
Jan 21 16:51:14 crc kubenswrapper[4760]: I0121 16:51:14.407881 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-lkblz_bd3c6c18-f174-4022-96c5-5892413c76fd/whereabouts-cni/0.log"
Jan 21 16:51:14 crc kubenswrapper[4760]: I0121 16:51:14.437155 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-n6cjk_7ae6da0d-f707-4d3e-8625-cae54fe221d0/multus-admission-controller/0.log"
Jan 21 16:51:14 crc kubenswrapper[4760]: I0121 16:51:14.442839 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-n6cjk_7ae6da0d-f707-4d3e-8625-cae54fe221d0/kube-rbac-proxy/0.log"
Jan 21 16:51:14 crc kubenswrapper[4760]: I0121 16:51:14.511562 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dx99k_7300c51f-415f-4696-bda1-a9e79ae5704a/kube-multus/2.log"
Jan 21 16:51:14 crc kubenswrapper[4760]: I0121 16:51:14.589535 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dx99k_7300c51f-415f-4696-bda1-a9e79ae5704a/kube-multus/3.log"
Jan 21 16:51:14 crc kubenswrapper[4760]: I0121 16:51:14.618401 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-bbr8l_0a4b6476-7a89-41b4-b918-5628f622c7c1/network-metrics-daemon/0.log"
Jan 21 16:51:14 crc kubenswrapper[4760]: I0121 16:51:14.623950 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-bbr8l_0a4b6476-7a89-41b4-b918-5628f622c7c1/kube-rbac-proxy/0.log"
Jan 21 16:51:20 crc kubenswrapper[4760]: I0121 16:51:20.946347 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 16:51:20 crc kubenswrapper[4760]: I0121 16:51:20.946888 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 16:51:20 crc kubenswrapper[4760]: I0121 16:51:20.946939 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r"
Jan 21 16:51:20 crc kubenswrapper[4760]: I0121 16:51:20.948058 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0010323ae814d56748b8c116d8ea055338dfa4efaced2a20eca64aacefee8a83"} pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 16:51:20 crc kubenswrapper[4760]: I0121 16:51:20.948459 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" containerID="cri-o://0010323ae814d56748b8c116d8ea055338dfa4efaced2a20eca64aacefee8a83" gracePeriod=600
Jan 21 16:51:21 crc kubenswrapper[4760]: E0121 16:51:21.083354 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:51:21 crc kubenswrapper[4760]: I0121 16:51:21.889520 4760 generic.go:334] "Generic (PLEG): container finished" podID="5dd365e7-570c-4130-a299-30e376624ce2" containerID="0010323ae814d56748b8c116d8ea055338dfa4efaced2a20eca64aacefee8a83" exitCode=0
Jan 21 16:51:21 crc kubenswrapper[4760]: I0121 16:51:21.889573 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerDied","Data":"0010323ae814d56748b8c116d8ea055338dfa4efaced2a20eca64aacefee8a83"}
Jan 21 16:51:21 crc kubenswrapper[4760]: I0121 16:51:21.889612 4760 scope.go:117] "RemoveContainer" containerID="696842cd662b21910865ccb4754bbe4d3b79853866b2d5f9cf45005d461a53ce"
Jan 21 16:51:21 crc kubenswrapper[4760]: I0121 16:51:21.890143 4760 scope.go:117] "RemoveContainer" containerID="0010323ae814d56748b8c116d8ea055338dfa4efaced2a20eca64aacefee8a83"
Jan 21 16:51:21 crc kubenswrapper[4760]: E0121 16:51:21.892398 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:51:33 crc kubenswrapper[4760]: I0121 16:51:33.622965 4760 scope.go:117] "RemoveContainer" containerID="0010323ae814d56748b8c116d8ea055338dfa4efaced2a20eca64aacefee8a83"
Jan 21 16:51:33 crc kubenswrapper[4760]: E0121 16:51:33.623804 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:51:44 crc kubenswrapper[4760]: I0121 16:51:44.622141 4760 scope.go:117] "RemoveContainer" containerID="0010323ae814d56748b8c116d8ea055338dfa4efaced2a20eca64aacefee8a83"
Jan 21 16:51:44 crc kubenswrapper[4760]: E0121 16:51:44.622924 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:51:59 crc kubenswrapper[4760]: I0121 16:51:59.633951 4760 scope.go:117] "RemoveContainer" containerID="0010323ae814d56748b8c116d8ea055338dfa4efaced2a20eca64aacefee8a83"
Jan 21 16:51:59 crc kubenswrapper[4760]: E0121 16:51:59.634850 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:52:12 crc kubenswrapper[4760]: I0121 16:52:12.623243 4760 scope.go:117] "RemoveContainer" containerID="0010323ae814d56748b8c116d8ea055338dfa4efaced2a20eca64aacefee8a83"
Jan 21 16:52:12 crc kubenswrapper[4760]: E0121 16:52:12.624241 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:52:26 crc kubenswrapper[4760]: I0121 16:52:26.622688 4760 scope.go:117] "RemoveContainer" containerID="0010323ae814d56748b8c116d8ea055338dfa4efaced2a20eca64aacefee8a83"
Jan 21 16:52:26 crc kubenswrapper[4760]: E0121 16:52:26.623493 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:52:37 crc kubenswrapper[4760]: I0121 16:52:37.623214 4760 scope.go:117] "RemoveContainer" containerID="0010323ae814d56748b8c116d8ea055338dfa4efaced2a20eca64aacefee8a83"
Jan 21 16:52:37 crc kubenswrapper[4760]: E0121 16:52:37.624109 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:52:51 crc kubenswrapper[4760]: I0121 16:52:51.623075 4760 scope.go:117] "RemoveContainer" containerID="0010323ae814d56748b8c116d8ea055338dfa4efaced2a20eca64aacefee8a83"
Jan 21 16:52:51 crc kubenswrapper[4760]: E0121 16:52:51.624006 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:53:03 crc kubenswrapper[4760]: I0121 16:53:03.623130 4760 scope.go:117] "RemoveContainer" containerID="0010323ae814d56748b8c116d8ea055338dfa4efaced2a20eca64aacefee8a83"
Jan 21 16:53:03 crc kubenswrapper[4760]: E0121 16:53:03.623913 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:53:16 crc kubenswrapper[4760]: I0121 16:53:16.623100 4760 scope.go:117] "RemoveContainer" containerID="0010323ae814d56748b8c116d8ea055338dfa4efaced2a20eca64aacefee8a83"
Jan 21 16:53:16 crc kubenswrapper[4760]: E0121 16:53:16.623987 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:53:28 crc kubenswrapper[4760]: I0121 16:53:28.623149 4760 scope.go:117] "RemoveContainer" containerID="0010323ae814d56748b8c116d8ea055338dfa4efaced2a20eca64aacefee8a83"
Jan 21 16:53:28 crc kubenswrapper[4760]: E0121 16:53:28.623897 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:53:39 crc kubenswrapper[4760]: I0121 16:53:39.629180 4760 scope.go:117] "RemoveContainer" containerID="0010323ae814d56748b8c116d8ea055338dfa4efaced2a20eca64aacefee8a83"
Jan 21 16:53:39 crc kubenswrapper[4760]: E0121 16:53:39.630037 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:53:53 crc kubenswrapper[4760]: I0121 16:53:53.623860 4760 scope.go:117] "RemoveContainer" containerID="0010323ae814d56748b8c116d8ea055338dfa4efaced2a20eca64aacefee8a83"
Jan 21 16:53:53 crc kubenswrapper[4760]: E0121 16:53:53.624620 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:54:04 crc kubenswrapper[4760]: I0121 16:54:04.622342 4760 scope.go:117] "RemoveContainer" containerID="0010323ae814d56748b8c116d8ea055338dfa4efaced2a20eca64aacefee8a83"
Jan 21 16:54:04 crc kubenswrapper[4760]: E0121 16:54:04.623252 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:54:16 crc kubenswrapper[4760]: I0121 16:54:16.623111 4760 scope.go:117] "RemoveContainer" containerID="0010323ae814d56748b8c116d8ea055338dfa4efaced2a20eca64aacefee8a83"
Jan 21 16:54:16 crc kubenswrapper[4760]: E0121 16:54:16.625794 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:54:27 crc kubenswrapper[4760]: I0121 16:54:27.624253 4760 scope.go:117] "RemoveContainer" containerID="0010323ae814d56748b8c116d8ea055338dfa4efaced2a20eca64aacefee8a83"
Jan 21 16:54:27 crc kubenswrapper[4760]: E0121 16:54:27.625512 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:54:41 crc kubenswrapper[4760]: I0121 16:54:41.622970 4760 scope.go:117] "RemoveContainer" containerID="0010323ae814d56748b8c116d8ea055338dfa4efaced2a20eca64aacefee8a83"
Jan 21 16:54:41 crc kubenswrapper[4760]: E0121 16:54:41.623727 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:54:43 crc kubenswrapper[4760]: I0121 16:54:43.435064 4760 scope.go:117] "RemoveContainer" containerID="5eaca0dba6db04cd67c8ee124b7fbd94d884f4d6e1e4b9795017aa247328cdeb"
Jan 21 16:54:56 crc kubenswrapper[4760]: I0121 16:54:56.622768 4760 scope.go:117] "RemoveContainer" containerID="0010323ae814d56748b8c116d8ea055338dfa4efaced2a20eca64aacefee8a83"
Jan 21 16:54:56 crc kubenswrapper[4760]: E0121 16:54:56.623589 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:55:10 crc kubenswrapper[4760]: I0121 16:55:10.623140 4760 scope.go:117] "RemoveContainer" containerID="0010323ae814d56748b8c116d8ea055338dfa4efaced2a20eca64aacefee8a83"
Jan 21 16:55:10 crc kubenswrapper[4760]: E0121 16:55:10.624260 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:55:23 crc kubenswrapper[4760]: I0121 16:55:23.623171 4760 scope.go:117] "RemoveContainer" containerID="0010323ae814d56748b8c116d8ea055338dfa4efaced2a20eca64aacefee8a83"
Jan 21 16:55:23 crc kubenswrapper[4760]: E0121 16:55:23.624028 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:55:34 crc kubenswrapper[4760]: I0121 16:55:34.623386 4760 scope.go:117] "RemoveContainer" containerID="0010323ae814d56748b8c116d8ea055338dfa4efaced2a20eca64aacefee8a83"
Jan 21 16:55:34 crc kubenswrapper[4760]: E0121 16:55:34.624145 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:55:36 crc kubenswrapper[4760]: I0121 16:55:36.844602 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5tdlg"]
Jan 21 16:55:36 crc kubenswrapper[4760]: E0121 16:55:36.845469 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d70163da-060a-47bf-b993-0f755a0e7018" containerName="registry-server"
Jan 21 16:55:36 crc kubenswrapper[4760]: I0121 16:55:36.845493 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="d70163da-060a-47bf-b993-0f755a0e7018" containerName="registry-server"
Jan 21 16:55:36 crc kubenswrapper[4760]: E0121 16:55:36.845514 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d70163da-060a-47bf-b993-0f755a0e7018" containerName="extract-content"
Jan 21 16:55:36 crc kubenswrapper[4760]: I0121 16:55:36.845520 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="d70163da-060a-47bf-b993-0f755a0e7018" containerName="extract-content"
Jan 21 16:55:36 crc kubenswrapper[4760]: E0121 16:55:36.845532 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d70163da-060a-47bf-b993-0f755a0e7018" containerName="extract-utilities"
Jan 21 16:55:36 crc kubenswrapper[4760]: I0121 16:55:36.845538 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="d70163da-060a-47bf-b993-0f755a0e7018" containerName="extract-utilities"
Jan 21 16:55:36 crc kubenswrapper[4760]: I0121 16:55:36.845791 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="d70163da-060a-47bf-b993-0f755a0e7018" containerName="registry-server"
Jan 21 16:55:36 crc kubenswrapper[4760]: I0121 16:55:36.873382 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5tdlg"]
Jan 21 16:55:36 crc kubenswrapper[4760]: I0121 16:55:36.873542 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5tdlg"
Jan 21 16:55:36 crc kubenswrapper[4760]: I0121 16:55:36.992512 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxz7x\" (UniqueName: \"kubernetes.io/projected/84c73ca7-9f22-4a7f-925f-e0a881d16663-kube-api-access-rxz7x\") pod \"redhat-operators-5tdlg\" (UID: \"84c73ca7-9f22-4a7f-925f-e0a881d16663\") " pod="openshift-marketplace/redhat-operators-5tdlg"
Jan 21 16:55:36 crc kubenswrapper[4760]: I0121 16:55:36.992598 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84c73ca7-9f22-4a7f-925f-e0a881d16663-catalog-content\") pod \"redhat-operators-5tdlg\" (UID: \"84c73ca7-9f22-4a7f-925f-e0a881d16663\") " pod="openshift-marketplace/redhat-operators-5tdlg"
Jan 21 16:55:36 crc kubenswrapper[4760]: I0121 16:55:36.993129 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84c73ca7-9f22-4a7f-925f-e0a881d16663-utilities\") pod \"redhat-operators-5tdlg\" (UID: \"84c73ca7-9f22-4a7f-925f-e0a881d16663\") " pod="openshift-marketplace/redhat-operators-5tdlg"
Jan 21 16:55:37 crc kubenswrapper[4760]: I0121 16:55:37.095402 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxz7x\" (UniqueName: \"kubernetes.io/projected/84c73ca7-9f22-4a7f-925f-e0a881d16663-kube-api-access-rxz7x\") pod \"redhat-operators-5tdlg\" (UID: \"84c73ca7-9f22-4a7f-925f-e0a881d16663\") " pod="openshift-marketplace/redhat-operators-5tdlg"
Jan 21 16:55:37 crc kubenswrapper[4760]: I0121 16:55:37.095462 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84c73ca7-9f22-4a7f-925f-e0a881d16663-catalog-content\") pod \"redhat-operators-5tdlg\" (UID: \"84c73ca7-9f22-4a7f-925f-e0a881d16663\") " pod="openshift-marketplace/redhat-operators-5tdlg"
Jan 21 16:55:37 crc kubenswrapper[4760]: I0121 16:55:37.095533 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84c73ca7-9f22-4a7f-925f-e0a881d16663-utilities\") pod \"redhat-operators-5tdlg\" (UID: \"84c73ca7-9f22-4a7f-925f-e0a881d16663\") " pod="openshift-marketplace/redhat-operators-5tdlg"
Jan 21 16:55:37 crc kubenswrapper[4760]: I0121 16:55:37.096117 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84c73ca7-9f22-4a7f-925f-e0a881d16663-utilities\") pod \"redhat-operators-5tdlg\" (UID: \"84c73ca7-9f22-4a7f-925f-e0a881d16663\") " pod="openshift-marketplace/redhat-operators-5tdlg"
Jan 21 16:55:37 crc kubenswrapper[4760]: I0121 16:55:37.096997 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84c73ca7-9f22-4a7f-925f-e0a881d16663-catalog-content\") pod \"redhat-operators-5tdlg\" (UID: \"84c73ca7-9f22-4a7f-925f-e0a881d16663\") " pod="openshift-marketplace/redhat-operators-5tdlg"
Jan 21 16:55:37 crc kubenswrapper[4760]: I0121 16:55:37.120107 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxz7x\" (UniqueName: \"kubernetes.io/projected/84c73ca7-9f22-4a7f-925f-e0a881d16663-kube-api-access-rxz7x\") pod \"redhat-operators-5tdlg\" (UID: \"84c73ca7-9f22-4a7f-925f-e0a881d16663\") " pod="openshift-marketplace/redhat-operators-5tdlg"
Jan 21 16:55:37 crc kubenswrapper[4760]: I0121 16:55:37.219241 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5tdlg"
Jan 21 16:55:37 crc kubenswrapper[4760]: I0121 16:55:37.475493 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5tdlg"]
Jan 21 16:55:38 crc kubenswrapper[4760]: I0121 16:55:38.228766 4760 generic.go:334] "Generic (PLEG): container finished" podID="84c73ca7-9f22-4a7f-925f-e0a881d16663" containerID="084e73294b49322199ca650dbb85fefd5277c640fcdebdd7f1dc727f7dabc8ed" exitCode=0
Jan 21 16:55:38 crc kubenswrapper[4760]: I0121 16:55:38.228808 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5tdlg" event={"ID":"84c73ca7-9f22-4a7f-925f-e0a881d16663","Type":"ContainerDied","Data":"084e73294b49322199ca650dbb85fefd5277c640fcdebdd7f1dc727f7dabc8ed"}
Jan 21 16:55:38 crc kubenswrapper[4760]: I0121 16:55:38.228860 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5tdlg" event={"ID":"84c73ca7-9f22-4a7f-925f-e0a881d16663","Type":"ContainerStarted","Data":"5a39a60073c062648702f2f1516e9b0a7071c282da39a1ace576a9376ff69246"}
Jan 21 16:55:38 crc kubenswrapper[4760]: I0121 16:55:38.231378 4760 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 21 16:55:39 crc kubenswrapper[4760]: I0121 16:55:39.238811 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5tdlg" event={"ID":"84c73ca7-9f22-4a7f-925f-e0a881d16663","Type":"ContainerStarted","Data":"eea941d28d7b591d6eb3fbc5fffe826337b732655738b2198b929afccab2b1c1"}
Jan 21 16:55:40 crc kubenswrapper[4760]: I0121 16:55:40.251838 4760 generic.go:334] "Generic (PLEG): container finished" podID="84c73ca7-9f22-4a7f-925f-e0a881d16663" containerID="eea941d28d7b591d6eb3fbc5fffe826337b732655738b2198b929afccab2b1c1" exitCode=0
Jan 21 16:55:40 crc kubenswrapper[4760]: I0121 16:55:40.251911 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5tdlg" event={"ID":"84c73ca7-9f22-4a7f-925f-e0a881d16663","Type":"ContainerDied","Data":"eea941d28d7b591d6eb3fbc5fffe826337b732655738b2198b929afccab2b1c1"}
Jan 21 16:55:41 crc kubenswrapper[4760]: I0121 16:55:41.264842 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5tdlg" event={"ID":"84c73ca7-9f22-4a7f-925f-e0a881d16663","Type":"ContainerStarted","Data":"2389dbdcf17596bb986bb9bee4ce218cbea0d8f7a286ff9f954b8e0f4600de1c"}
Jan 21 16:55:41 crc kubenswrapper[4760]: I0121 16:55:41.288635 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5tdlg" podStartSLOduration=2.825875546 podStartE2EDuration="5.288617633s" podCreationTimestamp="2026-01-21 16:55:36 +0000 UTC" firstStartedPulling="2026-01-21 16:55:38.231056868 +0000 UTC m=+4108.898826446" lastFinishedPulling="2026-01-21 16:55:40.693798955 +0000 UTC m=+4111.361568533" observedRunningTime="2026-01-21 16:55:41.285549376 +0000 UTC m=+4111.953318954" watchObservedRunningTime="2026-01-21 16:55:41.288617633 +0000 UTC m=+4111.956387211"
Jan 21 16:55:44 crc kubenswrapper[4760]: I0121 16:55:44.262970 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fglqk"]
Jan 21 16:55:44 crc kubenswrapper[4760]: I0121 16:55:44.265764 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fglqk"
Jan 21 16:55:44 crc kubenswrapper[4760]: I0121 16:55:44.271730 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fglqk"]
Jan 21 16:55:44 crc kubenswrapper[4760]: I0121 16:55:44.379633 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76d7ddff-d735-4be4-afd0-36eadae98c6b-utilities\") pod \"community-operators-fglqk\" (UID: \"76d7ddff-d735-4be4-afd0-36eadae98c6b\") " pod="openshift-marketplace/community-operators-fglqk"
Jan 21 16:55:44 crc kubenswrapper[4760]: I0121 16:55:44.380177 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76d7ddff-d735-4be4-afd0-36eadae98c6b-catalog-content\") pod \"community-operators-fglqk\" (UID: \"76d7ddff-d735-4be4-afd0-36eadae98c6b\") " pod="openshift-marketplace/community-operators-fglqk"
Jan 21 16:55:44 crc kubenswrapper[4760]: I0121 16:55:44.380349 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csb6f\" (UniqueName: \"kubernetes.io/projected/76d7ddff-d735-4be4-afd0-36eadae98c6b-kube-api-access-csb6f\") pod \"community-operators-fglqk\" (UID: \"76d7ddff-d735-4be4-afd0-36eadae98c6b\") " pod="openshift-marketplace/community-operators-fglqk"
Jan 21 16:55:44 crc kubenswrapper[4760]: I0121 16:55:44.482148 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76d7ddff-d735-4be4-afd0-36eadae98c6b-catalog-content\") pod \"community-operators-fglqk\" (UID: \"76d7ddff-d735-4be4-afd0-36eadae98c6b\") " pod="openshift-marketplace/community-operators-fglqk"
Jan 21 16:55:44 crc kubenswrapper[4760]: I0121 16:55:44.482202 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csb6f\" (UniqueName: \"kubernetes.io/projected/76d7ddff-d735-4be4-afd0-36eadae98c6b-kube-api-access-csb6f\") pod \"community-operators-fglqk\" (UID: \"76d7ddff-d735-4be4-afd0-36eadae98c6b\") " pod="openshift-marketplace/community-operators-fglqk"
Jan 21 16:55:44 crc kubenswrapper[4760]: I0121 16:55:44.482260 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76d7ddff-d735-4be4-afd0-36eadae98c6b-utilities\") pod \"community-operators-fglqk\" (UID: \"76d7ddff-d735-4be4-afd0-36eadae98c6b\") " pod="openshift-marketplace/community-operators-fglqk"
Jan 21 16:55:44 crc kubenswrapper[4760]: I0121 16:55:44.482852 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76d7ddff-d735-4be4-afd0-36eadae98c6b-utilities\") pod \"community-operators-fglqk\" (UID: \"76d7ddff-d735-4be4-afd0-36eadae98c6b\") " pod="openshift-marketplace/community-operators-fglqk"
Jan 21 16:55:44 crc kubenswrapper[4760]: I0121 16:55:44.483029 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76d7ddff-d735-4be4-afd0-36eadae98c6b-catalog-content\") pod \"community-operators-fglqk\" (UID: \"76d7ddff-d735-4be4-afd0-36eadae98c6b\") " pod="openshift-marketplace/community-operators-fglqk"
Jan 21 16:55:44 crc kubenswrapper[4760]: I0121 16:55:44.845950 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csb6f\" (UniqueName: \"kubernetes.io/projected/76d7ddff-d735-4be4-afd0-36eadae98c6b-kube-api-access-csb6f\") pod \"community-operators-fglqk\" (UID: \"76d7ddff-d735-4be4-afd0-36eadae98c6b\") " pod="openshift-marketplace/community-operators-fglqk"
Jan 21 16:55:44 crc kubenswrapper[4760]: I0121 16:55:44.890303 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fglqk"
Jan 21 16:55:45 crc kubenswrapper[4760]: W0121 16:55:45.186277 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76d7ddff_d735_4be4_afd0_36eadae98c6b.slice/crio-c4cbe16f6310c55c8215fd75046e9ccd3af1fe14461ddf096bf909302ebbb42c WatchSource:0}: Error finding container c4cbe16f6310c55c8215fd75046e9ccd3af1fe14461ddf096bf909302ebbb42c: Status 404 returned error can't find the container with id c4cbe16f6310c55c8215fd75046e9ccd3af1fe14461ddf096bf909302ebbb42c
Jan 21 16:55:45 crc kubenswrapper[4760]: I0121 16:55:45.203104 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fglqk"]
Jan 21 16:55:45 crc kubenswrapper[4760]: I0121 16:55:45.307678 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fglqk" event={"ID":"76d7ddff-d735-4be4-afd0-36eadae98c6b","Type":"ContainerStarted","Data":"c4cbe16f6310c55c8215fd75046e9ccd3af1fe14461ddf096bf909302ebbb42c"}
Jan 21 16:55:47 crc kubenswrapper[4760]: I0121 16:55:47.220117 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5tdlg"
Jan 21 16:55:47 crc kubenswrapper[4760]: I0121 16:55:47.220484 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5tdlg"
Jan 21 16:55:47 crc kubenswrapper[4760]: I0121 16:55:47.269857 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5tdlg"
Jan 21 16:55:47 crc kubenswrapper[4760]: I0121 16:55:47.335521 4760 generic.go:334] "Generic (PLEG): container finished" podID="76d7ddff-d735-4be4-afd0-36eadae98c6b" containerID="d8a0a510b6d1f90fe295622f555eb8a91aada23145bb38eb35dd8bdb6027ffc9" exitCode=0
Jan 21 16:55:47 crc kubenswrapper[4760]: I0121 16:55:47.335641 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fglqk" event={"ID":"76d7ddff-d735-4be4-afd0-36eadae98c6b","Type":"ContainerDied","Data":"d8a0a510b6d1f90fe295622f555eb8a91aada23145bb38eb35dd8bdb6027ffc9"}
Jan 21 16:55:47 crc kubenswrapper[4760]: I0121 16:55:47.389201 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5tdlg"
Jan 21 16:55:49 crc kubenswrapper[4760]: I0121 16:55:49.358825 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fglqk" event={"ID":"76d7ddff-d735-4be4-afd0-36eadae98c6b","Type":"ContainerStarted","Data":"11b021f3514c57ebe234847834d44fec30b5cb7d15f2f882b975ab258e1098ff"}
Jan 21 16:55:49 crc kubenswrapper[4760]: I0121 16:55:49.409111 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5tdlg"]
Jan 21 16:55:49 crc kubenswrapper[4760]: I0121 16:55:49.409376 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5tdlg" podUID="84c73ca7-9f22-4a7f-925f-e0a881d16663" containerName="registry-server" containerID="cri-o://2389dbdcf17596bb986bb9bee4ce218cbea0d8f7a286ff9f954b8e0f4600de1c" gracePeriod=2
Jan 21 16:55:49 crc kubenswrapper[4760]: I0121 16:55:49.662580 4760 scope.go:117] "RemoveContainer" containerID="0010323ae814d56748b8c116d8ea055338dfa4efaced2a20eca64aacefee8a83"
Jan 21 16:55:49 crc kubenswrapper[4760]: E0121 16:55:49.662854 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 16:55:50 crc kubenswrapper[4760]: I0121 16:55:50.367652 4760 generic.go:334] "Generic (PLEG): container finished" podID="76d7ddff-d735-4be4-afd0-36eadae98c6b" containerID="11b021f3514c57ebe234847834d44fec30b5cb7d15f2f882b975ab258e1098ff" exitCode=0
Jan 21 16:55:50 crc kubenswrapper[4760]: I0121 16:55:50.367681 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fglqk" event={"ID":"76d7ddff-d735-4be4-afd0-36eadae98c6b","Type":"ContainerDied","Data":"11b021f3514c57ebe234847834d44fec30b5cb7d15f2f882b975ab258e1098ff"}
Jan 21 16:55:53 crc kubenswrapper[4760]: I0121 16:55:53.797513 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5tdlg"
Jan 21 16:55:53 crc kubenswrapper[4760]: I0121 16:55:53.908976 4760 generic.go:334] "Generic (PLEG): container finished" podID="84c73ca7-9f22-4a7f-925f-e0a881d16663" containerID="2389dbdcf17596bb986bb9bee4ce218cbea0d8f7a286ff9f954b8e0f4600de1c" exitCode=0
Jan 21 16:55:53 crc kubenswrapper[4760]: I0121 16:55:53.909037 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5tdlg" event={"ID":"84c73ca7-9f22-4a7f-925f-e0a881d16663","Type":"ContainerDied","Data":"2389dbdcf17596bb986bb9bee4ce218cbea0d8f7a286ff9f954b8e0f4600de1c"}
Jan 21 16:55:53 crc kubenswrapper[4760]: I0121 16:55:53.909077 4760 scope.go:117] "RemoveContainer" containerID="2389dbdcf17596bb986bb9bee4ce218cbea0d8f7a286ff9f954b8e0f4600de1c"
Jan 21 16:55:53 crc kubenswrapper[4760]: I0121 16:55:53.931606 4760 scope.go:117] "RemoveContainer" containerID="eea941d28d7b591d6eb3fbc5fffe826337b732655738b2198b929afccab2b1c1"
Jan 21 16:55:53 crc kubenswrapper[4760]: I0121 16:55:53.972929 4760 scope.go:117] "RemoveContainer" containerID="084e73294b49322199ca650dbb85fefd5277c640fcdebdd7f1dc727f7dabc8ed"
Jan 21 16:55:53 crc kubenswrapper[4760]: I0121 16:55:53.992112 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-rxz7x\" (UniqueName: \"kubernetes.io/projected/84c73ca7-9f22-4a7f-925f-e0a881d16663-kube-api-access-rxz7x\") pod \"84c73ca7-9f22-4a7f-925f-e0a881d16663\" (UID: \"84c73ca7-9f22-4a7f-925f-e0a881d16663\") " Jan 21 16:55:53 crc kubenswrapper[4760]: I0121 16:55:53.992383 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84c73ca7-9f22-4a7f-925f-e0a881d16663-catalog-content\") pod \"84c73ca7-9f22-4a7f-925f-e0a881d16663\" (UID: \"84c73ca7-9f22-4a7f-925f-e0a881d16663\") " Jan 21 16:55:53 crc kubenswrapper[4760]: I0121 16:55:53.992411 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84c73ca7-9f22-4a7f-925f-e0a881d16663-utilities\") pod \"84c73ca7-9f22-4a7f-925f-e0a881d16663\" (UID: \"84c73ca7-9f22-4a7f-925f-e0a881d16663\") " Jan 21 16:55:53 crc kubenswrapper[4760]: I0121 16:55:53.993366 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84c73ca7-9f22-4a7f-925f-e0a881d16663-utilities" (OuterVolumeSpecName: "utilities") pod "84c73ca7-9f22-4a7f-925f-e0a881d16663" (UID: "84c73ca7-9f22-4a7f-925f-e0a881d16663"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:55:53 crc kubenswrapper[4760]: I0121 16:55:53.998669 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84c73ca7-9f22-4a7f-925f-e0a881d16663-kube-api-access-rxz7x" (OuterVolumeSpecName: "kube-api-access-rxz7x") pod "84c73ca7-9f22-4a7f-925f-e0a881d16663" (UID: "84c73ca7-9f22-4a7f-925f-e0a881d16663"). InnerVolumeSpecName "kube-api-access-rxz7x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:55:54 crc kubenswrapper[4760]: I0121 16:55:54.107432 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxz7x\" (UniqueName: \"kubernetes.io/projected/84c73ca7-9f22-4a7f-925f-e0a881d16663-kube-api-access-rxz7x\") on node \"crc\" DevicePath \"\"" Jan 21 16:55:54 crc kubenswrapper[4760]: I0121 16:55:54.107477 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84c73ca7-9f22-4a7f-925f-e0a881d16663-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:55:54 crc kubenswrapper[4760]: I0121 16:55:54.128711 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84c73ca7-9f22-4a7f-925f-e0a881d16663-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "84c73ca7-9f22-4a7f-925f-e0a881d16663" (UID: "84c73ca7-9f22-4a7f-925f-e0a881d16663"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:55:54 crc kubenswrapper[4760]: I0121 16:55:54.210023 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84c73ca7-9f22-4a7f-925f-e0a881d16663-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:55:54 crc kubenswrapper[4760]: I0121 16:55:54.928027 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fglqk" event={"ID":"76d7ddff-d735-4be4-afd0-36eadae98c6b","Type":"ContainerStarted","Data":"1c8741ca35bf6e89b1a972e6b4b903e09a7853285e60487a239a708d16dff6ba"} Jan 21 16:55:54 crc kubenswrapper[4760]: I0121 16:55:54.930901 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5tdlg" event={"ID":"84c73ca7-9f22-4a7f-925f-e0a881d16663","Type":"ContainerDied","Data":"5a39a60073c062648702f2f1516e9b0a7071c282da39a1ace576a9376ff69246"} Jan 21 16:55:54 crc kubenswrapper[4760]: I0121 
16:55:54.930915 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5tdlg" Jan 21 16:55:54 crc kubenswrapper[4760]: I0121 16:55:54.952052 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fglqk" podStartSLOduration=4.71449858 podStartE2EDuration="10.952032423s" podCreationTimestamp="2026-01-21 16:55:44 +0000 UTC" firstStartedPulling="2026-01-21 16:55:47.339111553 +0000 UTC m=+4118.006881131" lastFinishedPulling="2026-01-21 16:55:53.576645396 +0000 UTC m=+4124.244414974" observedRunningTime="2026-01-21 16:55:54.949464269 +0000 UTC m=+4125.617233847" watchObservedRunningTime="2026-01-21 16:55:54.952032423 +0000 UTC m=+4125.619802001" Jan 21 16:55:54 crc kubenswrapper[4760]: I0121 16:55:54.979980 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5tdlg"] Jan 21 16:55:54 crc kubenswrapper[4760]: I0121 16:55:54.988899 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5tdlg"] Jan 21 16:55:55 crc kubenswrapper[4760]: I0121 16:55:55.631759 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84c73ca7-9f22-4a7f-925f-e0a881d16663" path="/var/lib/kubelet/pods/84c73ca7-9f22-4a7f-925f-e0a881d16663/volumes" Jan 21 16:56:01 crc kubenswrapper[4760]: I0121 16:56:01.628153 4760 scope.go:117] "RemoveContainer" containerID="0010323ae814d56748b8c116d8ea055338dfa4efaced2a20eca64aacefee8a83" Jan 21 16:56:01 crc kubenswrapper[4760]: E0121 16:56:01.628929 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" 
podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:56:04 crc kubenswrapper[4760]: I0121 16:56:04.891115 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fglqk" Jan 21 16:56:04 crc kubenswrapper[4760]: I0121 16:56:04.891417 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fglqk" Jan 21 16:56:04 crc kubenswrapper[4760]: I0121 16:56:04.939018 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fglqk" Jan 21 16:56:05 crc kubenswrapper[4760]: I0121 16:56:05.064790 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fglqk" Jan 21 16:56:06 crc kubenswrapper[4760]: I0121 16:56:06.176614 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fglqk"] Jan 21 16:56:07 crc kubenswrapper[4760]: I0121 16:56:07.037592 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fglqk" podUID="76d7ddff-d735-4be4-afd0-36eadae98c6b" containerName="registry-server" containerID="cri-o://1c8741ca35bf6e89b1a972e6b4b903e09a7853285e60487a239a708d16dff6ba" gracePeriod=2 Jan 21 16:56:07 crc kubenswrapper[4760]: I0121 16:56:07.478858 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fglqk" Jan 21 16:56:07 crc kubenswrapper[4760]: I0121 16:56:07.669534 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76d7ddff-d735-4be4-afd0-36eadae98c6b-catalog-content\") pod \"76d7ddff-d735-4be4-afd0-36eadae98c6b\" (UID: \"76d7ddff-d735-4be4-afd0-36eadae98c6b\") " Jan 21 16:56:07 crc kubenswrapper[4760]: I0121 16:56:07.669624 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-csb6f\" (UniqueName: \"kubernetes.io/projected/76d7ddff-d735-4be4-afd0-36eadae98c6b-kube-api-access-csb6f\") pod \"76d7ddff-d735-4be4-afd0-36eadae98c6b\" (UID: \"76d7ddff-d735-4be4-afd0-36eadae98c6b\") " Jan 21 16:56:07 crc kubenswrapper[4760]: I0121 16:56:07.669718 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76d7ddff-d735-4be4-afd0-36eadae98c6b-utilities\") pod \"76d7ddff-d735-4be4-afd0-36eadae98c6b\" (UID: \"76d7ddff-d735-4be4-afd0-36eadae98c6b\") " Jan 21 16:56:07 crc kubenswrapper[4760]: I0121 16:56:07.670904 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76d7ddff-d735-4be4-afd0-36eadae98c6b-utilities" (OuterVolumeSpecName: "utilities") pod "76d7ddff-d735-4be4-afd0-36eadae98c6b" (UID: "76d7ddff-d735-4be4-afd0-36eadae98c6b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:56:07 crc kubenswrapper[4760]: I0121 16:56:07.675481 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76d7ddff-d735-4be4-afd0-36eadae98c6b-kube-api-access-csb6f" (OuterVolumeSpecName: "kube-api-access-csb6f") pod "76d7ddff-d735-4be4-afd0-36eadae98c6b" (UID: "76d7ddff-d735-4be4-afd0-36eadae98c6b"). InnerVolumeSpecName "kube-api-access-csb6f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:56:07 crc kubenswrapper[4760]: I0121 16:56:07.728119 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76d7ddff-d735-4be4-afd0-36eadae98c6b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "76d7ddff-d735-4be4-afd0-36eadae98c6b" (UID: "76d7ddff-d735-4be4-afd0-36eadae98c6b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:56:07 crc kubenswrapper[4760]: I0121 16:56:07.772973 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76d7ddff-d735-4be4-afd0-36eadae98c6b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 16:56:07 crc kubenswrapper[4760]: I0121 16:56:07.773028 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-csb6f\" (UniqueName: \"kubernetes.io/projected/76d7ddff-d735-4be4-afd0-36eadae98c6b-kube-api-access-csb6f\") on node \"crc\" DevicePath \"\"" Jan 21 16:56:07 crc kubenswrapper[4760]: I0121 16:56:07.773101 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76d7ddff-d735-4be4-afd0-36eadae98c6b-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 16:56:08 crc kubenswrapper[4760]: I0121 16:56:08.047174 4760 generic.go:334] "Generic (PLEG): container finished" podID="76d7ddff-d735-4be4-afd0-36eadae98c6b" containerID="1c8741ca35bf6e89b1a972e6b4b903e09a7853285e60487a239a708d16dff6ba" exitCode=0 Jan 21 16:56:08 crc kubenswrapper[4760]: I0121 16:56:08.047218 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fglqk" event={"ID":"76d7ddff-d735-4be4-afd0-36eadae98c6b","Type":"ContainerDied","Data":"1c8741ca35bf6e89b1a972e6b4b903e09a7853285e60487a239a708d16dff6ba"} Jan 21 16:56:08 crc kubenswrapper[4760]: I0121 16:56:08.047244 4760 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-fglqk" event={"ID":"76d7ddff-d735-4be4-afd0-36eadae98c6b","Type":"ContainerDied","Data":"c4cbe16f6310c55c8215fd75046e9ccd3af1fe14461ddf096bf909302ebbb42c"} Jan 21 16:56:08 crc kubenswrapper[4760]: I0121 16:56:08.047296 4760 scope.go:117] "RemoveContainer" containerID="1c8741ca35bf6e89b1a972e6b4b903e09a7853285e60487a239a708d16dff6ba" Jan 21 16:56:08 crc kubenswrapper[4760]: I0121 16:56:08.047455 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fglqk" Jan 21 16:56:08 crc kubenswrapper[4760]: I0121 16:56:08.083926 4760 scope.go:117] "RemoveContainer" containerID="11b021f3514c57ebe234847834d44fec30b5cb7d15f2f882b975ab258e1098ff" Jan 21 16:56:08 crc kubenswrapper[4760]: I0121 16:56:08.086586 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fglqk"] Jan 21 16:56:08 crc kubenswrapper[4760]: I0121 16:56:08.095692 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fglqk"] Jan 21 16:56:08 crc kubenswrapper[4760]: I0121 16:56:08.472602 4760 scope.go:117] "RemoveContainer" containerID="d8a0a510b6d1f90fe295622f555eb8a91aada23145bb38eb35dd8bdb6027ffc9" Jan 21 16:56:08 crc kubenswrapper[4760]: I0121 16:56:08.505138 4760 scope.go:117] "RemoveContainer" containerID="1c8741ca35bf6e89b1a972e6b4b903e09a7853285e60487a239a708d16dff6ba" Jan 21 16:56:08 crc kubenswrapper[4760]: E0121 16:56:08.505648 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c8741ca35bf6e89b1a972e6b4b903e09a7853285e60487a239a708d16dff6ba\": container with ID starting with 1c8741ca35bf6e89b1a972e6b4b903e09a7853285e60487a239a708d16dff6ba not found: ID does not exist" containerID="1c8741ca35bf6e89b1a972e6b4b903e09a7853285e60487a239a708d16dff6ba" Jan 21 16:56:08 crc kubenswrapper[4760]: I0121 
16:56:08.505708 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c8741ca35bf6e89b1a972e6b4b903e09a7853285e60487a239a708d16dff6ba"} err="failed to get container status \"1c8741ca35bf6e89b1a972e6b4b903e09a7853285e60487a239a708d16dff6ba\": rpc error: code = NotFound desc = could not find container \"1c8741ca35bf6e89b1a972e6b4b903e09a7853285e60487a239a708d16dff6ba\": container with ID starting with 1c8741ca35bf6e89b1a972e6b4b903e09a7853285e60487a239a708d16dff6ba not found: ID does not exist" Jan 21 16:56:08 crc kubenswrapper[4760]: I0121 16:56:08.505747 4760 scope.go:117] "RemoveContainer" containerID="11b021f3514c57ebe234847834d44fec30b5cb7d15f2f882b975ab258e1098ff" Jan 21 16:56:08 crc kubenswrapper[4760]: E0121 16:56:08.506060 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11b021f3514c57ebe234847834d44fec30b5cb7d15f2f882b975ab258e1098ff\": container with ID starting with 11b021f3514c57ebe234847834d44fec30b5cb7d15f2f882b975ab258e1098ff not found: ID does not exist" containerID="11b021f3514c57ebe234847834d44fec30b5cb7d15f2f882b975ab258e1098ff" Jan 21 16:56:08 crc kubenswrapper[4760]: I0121 16:56:08.506097 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11b021f3514c57ebe234847834d44fec30b5cb7d15f2f882b975ab258e1098ff"} err="failed to get container status \"11b021f3514c57ebe234847834d44fec30b5cb7d15f2f882b975ab258e1098ff\": rpc error: code = NotFound desc = could not find container \"11b021f3514c57ebe234847834d44fec30b5cb7d15f2f882b975ab258e1098ff\": container with ID starting with 11b021f3514c57ebe234847834d44fec30b5cb7d15f2f882b975ab258e1098ff not found: ID does not exist" Jan 21 16:56:08 crc kubenswrapper[4760]: I0121 16:56:08.506117 4760 scope.go:117] "RemoveContainer" containerID="d8a0a510b6d1f90fe295622f555eb8a91aada23145bb38eb35dd8bdb6027ffc9" Jan 21 16:56:08 crc 
kubenswrapper[4760]: E0121 16:56:08.506399 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8a0a510b6d1f90fe295622f555eb8a91aada23145bb38eb35dd8bdb6027ffc9\": container with ID starting with d8a0a510b6d1f90fe295622f555eb8a91aada23145bb38eb35dd8bdb6027ffc9 not found: ID does not exist" containerID="d8a0a510b6d1f90fe295622f555eb8a91aada23145bb38eb35dd8bdb6027ffc9" Jan 21 16:56:08 crc kubenswrapper[4760]: I0121 16:56:08.506433 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8a0a510b6d1f90fe295622f555eb8a91aada23145bb38eb35dd8bdb6027ffc9"} err="failed to get container status \"d8a0a510b6d1f90fe295622f555eb8a91aada23145bb38eb35dd8bdb6027ffc9\": rpc error: code = NotFound desc = could not find container \"d8a0a510b6d1f90fe295622f555eb8a91aada23145bb38eb35dd8bdb6027ffc9\": container with ID starting with d8a0a510b6d1f90fe295622f555eb8a91aada23145bb38eb35dd8bdb6027ffc9 not found: ID does not exist" Jan 21 16:56:09 crc kubenswrapper[4760]: I0121 16:56:09.638546 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76d7ddff-d735-4be4-afd0-36eadae98c6b" path="/var/lib/kubelet/pods/76d7ddff-d735-4be4-afd0-36eadae98c6b/volumes" Jan 21 16:56:12 crc kubenswrapper[4760]: I0121 16:56:12.625002 4760 scope.go:117] "RemoveContainer" containerID="0010323ae814d56748b8c116d8ea055338dfa4efaced2a20eca64aacefee8a83" Jan 21 16:56:12 crc kubenswrapper[4760]: E0121 16:56:12.627866 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 16:56:26 crc 
kubenswrapper[4760]: I0121 16:56:26.626436 4760 scope.go:117] "RemoveContainer" containerID="0010323ae814d56748b8c116d8ea055338dfa4efaced2a20eca64aacefee8a83" Jan 21 16:56:27 crc kubenswrapper[4760]: I0121 16:56:27.208409 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerStarted","Data":"6de7194e7847840eaef030fbacdefa560c3c693cba625d032ed94f16b72b9d9e"} Jan 21 16:58:04 crc kubenswrapper[4760]: I0121 16:58:04.150112 4760 generic.go:334] "Generic (PLEG): container finished" podID="42fc2543-8bf4-4b71-8196-e19f701ed2f8" containerID="dd458370f6a3866b76583e108e35d0b33cb0229df427b03f3d63a0584818be82" exitCode=0 Jan 21 16:58:04 crc kubenswrapper[4760]: I0121 16:58:04.150677 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x7hmn/must-gather-xv7ts" event={"ID":"42fc2543-8bf4-4b71-8196-e19f701ed2f8","Type":"ContainerDied","Data":"dd458370f6a3866b76583e108e35d0b33cb0229df427b03f3d63a0584818be82"} Jan 21 16:58:04 crc kubenswrapper[4760]: I0121 16:58:04.151220 4760 scope.go:117] "RemoveContainer" containerID="dd458370f6a3866b76583e108e35d0b33cb0229df427b03f3d63a0584818be82" Jan 21 16:58:04 crc kubenswrapper[4760]: I0121 16:58:04.236228 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-x7hmn_must-gather-xv7ts_42fc2543-8bf4-4b71-8196-e19f701ed2f8/gather/0.log" Jan 21 16:58:12 crc kubenswrapper[4760]: I0121 16:58:12.795382 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-x7hmn/must-gather-xv7ts"] Jan 21 16:58:12 crc kubenswrapper[4760]: I0121 16:58:12.795993 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-x7hmn/must-gather-xv7ts" podUID="42fc2543-8bf4-4b71-8196-e19f701ed2f8" containerName="copy" containerID="cri-o://763403362ba10a30ac17a883a3a403546ca4509b0bbd6ee773a9a88bd12fef3a" 
gracePeriod=2 Jan 21 16:58:12 crc kubenswrapper[4760]: I0121 16:58:12.804500 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-x7hmn/must-gather-xv7ts"] Jan 21 16:58:13 crc kubenswrapper[4760]: I0121 16:58:13.239139 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-x7hmn_must-gather-xv7ts_42fc2543-8bf4-4b71-8196-e19f701ed2f8/copy/0.log" Jan 21 16:58:13 crc kubenswrapper[4760]: I0121 16:58:13.239749 4760 generic.go:334] "Generic (PLEG): container finished" podID="42fc2543-8bf4-4b71-8196-e19f701ed2f8" containerID="763403362ba10a30ac17a883a3a403546ca4509b0bbd6ee773a9a88bd12fef3a" exitCode=143 Jan 21 16:58:13 crc kubenswrapper[4760]: I0121 16:58:13.239801 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="064d00ee4b9b29489608e81fac44ff42dcc3ee165feb8b354f00902d275ae880" Jan 21 16:58:13 crc kubenswrapper[4760]: I0121 16:58:13.331160 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-x7hmn_must-gather-xv7ts_42fc2543-8bf4-4b71-8196-e19f701ed2f8/copy/0.log" Jan 21 16:58:13 crc kubenswrapper[4760]: I0121 16:58:13.331717 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-x7hmn/must-gather-xv7ts" Jan 21 16:58:13 crc kubenswrapper[4760]: I0121 16:58:13.432085 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7dv4\" (UniqueName: \"kubernetes.io/projected/42fc2543-8bf4-4b71-8196-e19f701ed2f8-kube-api-access-z7dv4\") pod \"42fc2543-8bf4-4b71-8196-e19f701ed2f8\" (UID: \"42fc2543-8bf4-4b71-8196-e19f701ed2f8\") " Jan 21 16:58:13 crc kubenswrapper[4760]: I0121 16:58:13.432154 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/42fc2543-8bf4-4b71-8196-e19f701ed2f8-must-gather-output\") pod \"42fc2543-8bf4-4b71-8196-e19f701ed2f8\" (UID: \"42fc2543-8bf4-4b71-8196-e19f701ed2f8\") " Jan 21 16:58:13 crc kubenswrapper[4760]: I0121 16:58:13.441205 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42fc2543-8bf4-4b71-8196-e19f701ed2f8-kube-api-access-z7dv4" (OuterVolumeSpecName: "kube-api-access-z7dv4") pod "42fc2543-8bf4-4b71-8196-e19f701ed2f8" (UID: "42fc2543-8bf4-4b71-8196-e19f701ed2f8"). InnerVolumeSpecName "kube-api-access-z7dv4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 16:58:13 crc kubenswrapper[4760]: I0121 16:58:13.542727 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7dv4\" (UniqueName: \"kubernetes.io/projected/42fc2543-8bf4-4b71-8196-e19f701ed2f8-kube-api-access-z7dv4\") on node \"crc\" DevicePath \"\"" Jan 21 16:58:13 crc kubenswrapper[4760]: I0121 16:58:13.612612 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42fc2543-8bf4-4b71-8196-e19f701ed2f8-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "42fc2543-8bf4-4b71-8196-e19f701ed2f8" (UID: "42fc2543-8bf4-4b71-8196-e19f701ed2f8"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 16:58:13 crc kubenswrapper[4760]: I0121 16:58:13.634252 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42fc2543-8bf4-4b71-8196-e19f701ed2f8" path="/var/lib/kubelet/pods/42fc2543-8bf4-4b71-8196-e19f701ed2f8/volumes" Jan 21 16:58:13 crc kubenswrapper[4760]: I0121 16:58:13.644862 4760 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/42fc2543-8bf4-4b71-8196-e19f701ed2f8-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 21 16:58:14 crc kubenswrapper[4760]: I0121 16:58:14.246539 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-x7hmn/must-gather-xv7ts" Jan 21 16:58:43 crc kubenswrapper[4760]: I0121 16:58:43.884628 4760 scope.go:117] "RemoveContainer" containerID="763403362ba10a30ac17a883a3a403546ca4509b0bbd6ee773a9a88bd12fef3a" Jan 21 16:58:43 crc kubenswrapper[4760]: I0121 16:58:43.909022 4760 scope.go:117] "RemoveContainer" containerID="dd458370f6a3866b76583e108e35d0b33cb0229df427b03f3d63a0584818be82" Jan 21 16:58:50 crc kubenswrapper[4760]: I0121 16:58:50.946351 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:58:50 crc kubenswrapper[4760]: I0121 16:58:50.947640 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:59:20 crc kubenswrapper[4760]: I0121 16:59:20.946233 4760 patch_prober.go:28] interesting 
pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:59:20 crc kubenswrapper[4760]: I0121 16:59:20.948057 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:59:49 crc kubenswrapper[4760]: I0121 16:59:49.120420 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-n76vm/must-gather-hpmlt"] Jan 21 16:59:49 crc kubenswrapper[4760]: E0121 16:59:49.121416 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76d7ddff-d735-4be4-afd0-36eadae98c6b" containerName="extract-content" Jan 21 16:59:49 crc kubenswrapper[4760]: I0121 16:59:49.121431 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="76d7ddff-d735-4be4-afd0-36eadae98c6b" containerName="extract-content" Jan 21 16:59:49 crc kubenswrapper[4760]: E0121 16:59:49.121442 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84c73ca7-9f22-4a7f-925f-e0a881d16663" containerName="registry-server" Jan 21 16:59:49 crc kubenswrapper[4760]: I0121 16:59:49.121449 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="84c73ca7-9f22-4a7f-925f-e0a881d16663" containerName="registry-server" Jan 21 16:59:49 crc kubenswrapper[4760]: E0121 16:59:49.121463 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76d7ddff-d735-4be4-afd0-36eadae98c6b" containerName="extract-utilities" Jan 21 16:59:49 crc kubenswrapper[4760]: I0121 16:59:49.121470 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="76d7ddff-d735-4be4-afd0-36eadae98c6b" 
containerName="extract-utilities" Jan 21 16:59:49 crc kubenswrapper[4760]: E0121 16:59:49.121487 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76d7ddff-d735-4be4-afd0-36eadae98c6b" containerName="registry-server" Jan 21 16:59:49 crc kubenswrapper[4760]: I0121 16:59:49.121494 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="76d7ddff-d735-4be4-afd0-36eadae98c6b" containerName="registry-server" Jan 21 16:59:49 crc kubenswrapper[4760]: E0121 16:59:49.121508 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84c73ca7-9f22-4a7f-925f-e0a881d16663" containerName="extract-content" Jan 21 16:59:49 crc kubenswrapper[4760]: I0121 16:59:49.121515 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="84c73ca7-9f22-4a7f-925f-e0a881d16663" containerName="extract-content" Jan 21 16:59:49 crc kubenswrapper[4760]: E0121 16:59:49.121527 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84c73ca7-9f22-4a7f-925f-e0a881d16663" containerName="extract-utilities" Jan 21 16:59:49 crc kubenswrapper[4760]: I0121 16:59:49.121533 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="84c73ca7-9f22-4a7f-925f-e0a881d16663" containerName="extract-utilities" Jan 21 16:59:49 crc kubenswrapper[4760]: E0121 16:59:49.121544 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42fc2543-8bf4-4b71-8196-e19f701ed2f8" containerName="copy" Jan 21 16:59:49 crc kubenswrapper[4760]: I0121 16:59:49.121550 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="42fc2543-8bf4-4b71-8196-e19f701ed2f8" containerName="copy" Jan 21 16:59:49 crc kubenswrapper[4760]: E0121 16:59:49.121564 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42fc2543-8bf4-4b71-8196-e19f701ed2f8" containerName="gather" Jan 21 16:59:49 crc kubenswrapper[4760]: I0121 16:59:49.121570 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="42fc2543-8bf4-4b71-8196-e19f701ed2f8" containerName="gather" Jan 21 16:59:49 
crc kubenswrapper[4760]: I0121 16:59:49.121754 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="42fc2543-8bf4-4b71-8196-e19f701ed2f8" containerName="copy" Jan 21 16:59:49 crc kubenswrapper[4760]: I0121 16:59:49.121763 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="76d7ddff-d735-4be4-afd0-36eadae98c6b" containerName="registry-server" Jan 21 16:59:49 crc kubenswrapper[4760]: I0121 16:59:49.121781 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="84c73ca7-9f22-4a7f-925f-e0a881d16663" containerName="registry-server" Jan 21 16:59:49 crc kubenswrapper[4760]: I0121 16:59:49.121794 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="42fc2543-8bf4-4b71-8196-e19f701ed2f8" containerName="gather" Jan 21 16:59:49 crc kubenswrapper[4760]: I0121 16:59:49.122758 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-n76vm/must-gather-hpmlt" Jan 21 16:59:49 crc kubenswrapper[4760]: I0121 16:59:49.126427 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-n76vm"/"default-dockercfg-crvx4" Jan 21 16:59:49 crc kubenswrapper[4760]: I0121 16:59:49.126767 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-n76vm"/"kube-root-ca.crt" Jan 21 16:59:49 crc kubenswrapper[4760]: I0121 16:59:49.131519 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-n76vm"/"openshift-service-ca.crt" Jan 21 16:59:49 crc kubenswrapper[4760]: I0121 16:59:49.188267 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-n76vm/must-gather-hpmlt"] Jan 21 16:59:49 crc kubenswrapper[4760]: I0121 16:59:49.324377 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pdlt\" (UniqueName: \"kubernetes.io/projected/bcdeb98a-d5e9-441e-914e-7b995f026bd4-kube-api-access-2pdlt\") pod 
\"must-gather-hpmlt\" (UID: \"bcdeb98a-d5e9-441e-914e-7b995f026bd4\") " pod="openshift-must-gather-n76vm/must-gather-hpmlt" Jan 21 16:59:49 crc kubenswrapper[4760]: I0121 16:59:49.324494 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bcdeb98a-d5e9-441e-914e-7b995f026bd4-must-gather-output\") pod \"must-gather-hpmlt\" (UID: \"bcdeb98a-d5e9-441e-914e-7b995f026bd4\") " pod="openshift-must-gather-n76vm/must-gather-hpmlt" Jan 21 16:59:49 crc kubenswrapper[4760]: I0121 16:59:49.426708 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pdlt\" (UniqueName: \"kubernetes.io/projected/bcdeb98a-d5e9-441e-914e-7b995f026bd4-kube-api-access-2pdlt\") pod \"must-gather-hpmlt\" (UID: \"bcdeb98a-d5e9-441e-914e-7b995f026bd4\") " pod="openshift-must-gather-n76vm/must-gather-hpmlt" Jan 21 16:59:49 crc kubenswrapper[4760]: I0121 16:59:49.426786 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bcdeb98a-d5e9-441e-914e-7b995f026bd4-must-gather-output\") pod \"must-gather-hpmlt\" (UID: \"bcdeb98a-d5e9-441e-914e-7b995f026bd4\") " pod="openshift-must-gather-n76vm/must-gather-hpmlt" Jan 21 16:59:49 crc kubenswrapper[4760]: I0121 16:59:49.427285 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bcdeb98a-d5e9-441e-914e-7b995f026bd4-must-gather-output\") pod \"must-gather-hpmlt\" (UID: \"bcdeb98a-d5e9-441e-914e-7b995f026bd4\") " pod="openshift-must-gather-n76vm/must-gather-hpmlt" Jan 21 16:59:49 crc kubenswrapper[4760]: I0121 16:59:49.450483 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pdlt\" (UniqueName: \"kubernetes.io/projected/bcdeb98a-d5e9-441e-914e-7b995f026bd4-kube-api-access-2pdlt\") pod 
\"must-gather-hpmlt\" (UID: \"bcdeb98a-d5e9-441e-914e-7b995f026bd4\") " pod="openshift-must-gather-n76vm/must-gather-hpmlt" Jan 21 16:59:49 crc kubenswrapper[4760]: I0121 16:59:49.746616 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-n76vm/must-gather-hpmlt" Jan 21 16:59:50 crc kubenswrapper[4760]: I0121 16:59:50.252929 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-n76vm/must-gather-hpmlt"] Jan 21 16:59:50 crc kubenswrapper[4760]: I0121 16:59:50.946193 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 16:59:50 crc kubenswrapper[4760]: I0121 16:59:50.946884 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 16:59:50 crc kubenswrapper[4760]: I0121 16:59:50.947246 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" Jan 21 16:59:50 crc kubenswrapper[4760]: I0121 16:59:50.948392 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6de7194e7847840eaef030fbacdefa560c3c693cba625d032ed94f16b72b9d9e"} pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 16:59:50 crc kubenswrapper[4760]: I0121 16:59:50.948566 4760 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" containerID="cri-o://6de7194e7847840eaef030fbacdefa560c3c693cba625d032ed94f16b72b9d9e" gracePeriod=600 Jan 21 16:59:51 crc kubenswrapper[4760]: I0121 16:59:51.096084 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-n76vm/must-gather-hpmlt" event={"ID":"bcdeb98a-d5e9-441e-914e-7b995f026bd4","Type":"ContainerStarted","Data":"fa9c83421ef5e7c1de28842bee8465dc922b2c02a1bbb9a063a500c5e99e63c2"} Jan 21 16:59:51 crc kubenswrapper[4760]: I0121 16:59:51.096139 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-n76vm/must-gather-hpmlt" event={"ID":"bcdeb98a-d5e9-441e-914e-7b995f026bd4","Type":"ContainerStarted","Data":"e9874a212c8ae576e4e4159ddfbdd3e6e10465e29cbe88a24e3718402612a282"} Jan 21 16:59:51 crc kubenswrapper[4760]: I0121 16:59:51.096155 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-n76vm/must-gather-hpmlt" event={"ID":"bcdeb98a-d5e9-441e-914e-7b995f026bd4","Type":"ContainerStarted","Data":"7e51374153a0dd02502c820ce8ad68fda42a2a159d7ef4c259a8fc9a0164670f"} Jan 21 16:59:51 crc kubenswrapper[4760]: I0121 16:59:51.099368 4760 generic.go:334] "Generic (PLEG): container finished" podID="5dd365e7-570c-4130-a299-30e376624ce2" containerID="6de7194e7847840eaef030fbacdefa560c3c693cba625d032ed94f16b72b9d9e" exitCode=0 Jan 21 16:59:51 crc kubenswrapper[4760]: I0121 16:59:51.099420 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerDied","Data":"6de7194e7847840eaef030fbacdefa560c3c693cba625d032ed94f16b72b9d9e"} Jan 21 16:59:51 crc kubenswrapper[4760]: I0121 16:59:51.099461 4760 scope.go:117] "RemoveContainer" 
containerID="0010323ae814d56748b8c116d8ea055338dfa4efaced2a20eca64aacefee8a83" Jan 21 16:59:51 crc kubenswrapper[4760]: I0121 16:59:51.124279 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-n76vm/must-gather-hpmlt" podStartSLOduration=2.12425546 podStartE2EDuration="2.12425546s" podCreationTimestamp="2026-01-21 16:59:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:59:51.114474157 +0000 UTC m=+4361.782243735" watchObservedRunningTime="2026-01-21 16:59:51.12425546 +0000 UTC m=+4361.792025038" Jan 21 16:59:52 crc kubenswrapper[4760]: I0121 16:59:52.110958 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerStarted","Data":"487ee7a25796844d3241333d83e375f710897deb4616727fe5bac585a055bf45"} Jan 21 16:59:54 crc kubenswrapper[4760]: I0121 16:59:54.320653 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-n76vm/crc-debug-9r66k"] Jan 21 16:59:54 crc kubenswrapper[4760]: I0121 16:59:54.322515 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-n76vm/crc-debug-9r66k" Jan 21 16:59:54 crc kubenswrapper[4760]: I0121 16:59:54.401520 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phkj7\" (UniqueName: \"kubernetes.io/projected/4c66b399-243c-4c7e-95d8-ea8d8e3f137e-kube-api-access-phkj7\") pod \"crc-debug-9r66k\" (UID: \"4c66b399-243c-4c7e-95d8-ea8d8e3f137e\") " pod="openshift-must-gather-n76vm/crc-debug-9r66k" Jan 21 16:59:54 crc kubenswrapper[4760]: I0121 16:59:54.401575 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4c66b399-243c-4c7e-95d8-ea8d8e3f137e-host\") pod \"crc-debug-9r66k\" (UID: \"4c66b399-243c-4c7e-95d8-ea8d8e3f137e\") " pod="openshift-must-gather-n76vm/crc-debug-9r66k" Jan 21 16:59:54 crc kubenswrapper[4760]: I0121 16:59:54.503687 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phkj7\" (UniqueName: \"kubernetes.io/projected/4c66b399-243c-4c7e-95d8-ea8d8e3f137e-kube-api-access-phkj7\") pod \"crc-debug-9r66k\" (UID: \"4c66b399-243c-4c7e-95d8-ea8d8e3f137e\") " pod="openshift-must-gather-n76vm/crc-debug-9r66k" Jan 21 16:59:54 crc kubenswrapper[4760]: I0121 16:59:54.503755 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4c66b399-243c-4c7e-95d8-ea8d8e3f137e-host\") pod \"crc-debug-9r66k\" (UID: \"4c66b399-243c-4c7e-95d8-ea8d8e3f137e\") " pod="openshift-must-gather-n76vm/crc-debug-9r66k" Jan 21 16:59:54 crc kubenswrapper[4760]: I0121 16:59:54.503925 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4c66b399-243c-4c7e-95d8-ea8d8e3f137e-host\") pod \"crc-debug-9r66k\" (UID: \"4c66b399-243c-4c7e-95d8-ea8d8e3f137e\") " pod="openshift-must-gather-n76vm/crc-debug-9r66k" Jan 21 16:59:54 crc 
kubenswrapper[4760]: I0121 16:59:54.945310 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phkj7\" (UniqueName: \"kubernetes.io/projected/4c66b399-243c-4c7e-95d8-ea8d8e3f137e-kube-api-access-phkj7\") pod \"crc-debug-9r66k\" (UID: \"4c66b399-243c-4c7e-95d8-ea8d8e3f137e\") " pod="openshift-must-gather-n76vm/crc-debug-9r66k" Jan 21 16:59:55 crc kubenswrapper[4760]: I0121 16:59:55.242605 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-n76vm/crc-debug-9r66k" Jan 21 16:59:55 crc kubenswrapper[4760]: W0121 16:59:55.282599 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c66b399_243c_4c7e_95d8_ea8d8e3f137e.slice/crio-94670ce315e15c452d7e6e13e4697fbdb97f167550b09878a461868848cd3bf9 WatchSource:0}: Error finding container 94670ce315e15c452d7e6e13e4697fbdb97f167550b09878a461868848cd3bf9: Status 404 returned error can't find the container with id 94670ce315e15c452d7e6e13e4697fbdb97f167550b09878a461868848cd3bf9 Jan 21 16:59:56 crc kubenswrapper[4760]: I0121 16:59:56.148401 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-n76vm/crc-debug-9r66k" event={"ID":"4c66b399-243c-4c7e-95d8-ea8d8e3f137e","Type":"ContainerStarted","Data":"e54eb16dc286959ca544ab71c5016b5dbf17032e048932e727fbc6332a098d2c"} Jan 21 16:59:56 crc kubenswrapper[4760]: I0121 16:59:56.148912 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-n76vm/crc-debug-9r66k" event={"ID":"4c66b399-243c-4c7e-95d8-ea8d8e3f137e","Type":"ContainerStarted","Data":"94670ce315e15c452d7e6e13e4697fbdb97f167550b09878a461868848cd3bf9"} Jan 21 16:59:56 crc kubenswrapper[4760]: I0121 16:59:56.169219 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-n76vm/crc-debug-9r66k" podStartSLOduration=2.169198475 podStartE2EDuration="2.169198475s" 
podCreationTimestamp="2026-01-21 16:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 16:59:56.164131919 +0000 UTC m=+4366.831901517" watchObservedRunningTime="2026-01-21 16:59:56.169198475 +0000 UTC m=+4366.836968053" Jan 21 16:59:57 crc kubenswrapper[4760]: I0121 16:59:57.431749 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5c89c5dbb6-sspr9_78418f27-9273-42a4-aaa2-74edfcd10ef1/barbican-api-log/0.log" Jan 21 16:59:57 crc kubenswrapper[4760]: I0121 16:59:57.442690 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5c89c5dbb6-sspr9_78418f27-9273-42a4-aaa2-74edfcd10ef1/barbican-api/0.log" Jan 21 16:59:57 crc kubenswrapper[4760]: I0121 16:59:57.474780 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-86579fc786-9vmn6_6283023b-6e8b-4d25-b8e9-c0d91b08a913/barbican-keystone-listener-log/0.log" Jan 21 16:59:57 crc kubenswrapper[4760]: I0121 16:59:57.481787 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-86579fc786-9vmn6_6283023b-6e8b-4d25-b8e9-c0d91b08a913/barbican-keystone-listener/0.log" Jan 21 16:59:57 crc kubenswrapper[4760]: I0121 16:59:57.501819 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-757cdb9855-pfpj6_470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed/barbican-worker-log/0.log" Jan 21 16:59:57 crc kubenswrapper[4760]: I0121 16:59:57.509182 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-757cdb9855-pfpj6_470850c9-a1ed-4ea2-b7f1-b3bc6745b6ed/barbican-worker/0.log" Jan 21 16:59:57 crc kubenswrapper[4760]: I0121 16:59:57.550230 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-8kk4f_f4ba3e4f-146a-4af6-885a-877760c90ce0/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:59:57 crc kubenswrapper[4760]: I0121 16:59:57.586269 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_c3a59982-94c8-461f-99f6-8154ca0666c2/ceilometer-central-agent/0.log" Jan 21 16:59:57 crc kubenswrapper[4760]: I0121 16:59:57.609434 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_c3a59982-94c8-461f-99f6-8154ca0666c2/ceilometer-notification-agent/0.log" Jan 21 16:59:57 crc kubenswrapper[4760]: I0121 16:59:57.616697 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_c3a59982-94c8-461f-99f6-8154ca0666c2/sg-core/0.log" Jan 21 16:59:57 crc kubenswrapper[4760]: I0121 16:59:57.633793 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_c3a59982-94c8-461f-99f6-8154ca0666c2/proxy-httpd/0.log" Jan 21 16:59:57 crc kubenswrapper[4760]: I0121 16:59:57.651570 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_f57ee425-0d4d-41f7-bf99-4ab4e87ead78/cinder-api-log/0.log" Jan 21 16:59:57 crc kubenswrapper[4760]: I0121 16:59:57.721969 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_f57ee425-0d4d-41f7-bf99-4ab4e87ead78/cinder-api/0.log" Jan 21 16:59:57 crc kubenswrapper[4760]: I0121 16:59:57.795748 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_98fdd45e-ce0f-464e-9ac9-a61c03e0eea5/cinder-scheduler/0.log" Jan 21 16:59:57 crc kubenswrapper[4760]: I0121 16:59:57.832542 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_98fdd45e-ce0f-464e-9ac9-a61c03e0eea5/probe/0.log" Jan 21 16:59:58 crc kubenswrapper[4760]: I0121 16:59:58.054655 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-c7vns_8adc5733-eeac-4148-878a-61b908f0a85b/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:59:58 crc kubenswrapper[4760]: I0121 16:59:58.078129 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-2v6sq_cd8384f1-8b63-421a-b279-ae67ba25c2d2/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:59:58 crc kubenswrapper[4760]: I0121 16:59:58.146802 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-9nlpp_2be85016-adb8-42d1-8b8b-90d92e06edec/dnsmasq-dns/0.log" Jan 21 16:59:58 crc kubenswrapper[4760]: I0121 16:59:58.152613 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-9nlpp_2be85016-adb8-42d1-8b8b-90d92e06edec/init/0.log" Jan 21 16:59:58 crc kubenswrapper[4760]: I0121 16:59:58.178379 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-sd829_2ffc46c3-eeae-4b68-bede-4c1e5af6fe46/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:59:58 crc kubenswrapper[4760]: I0121 16:59:58.194386 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_468d7d17-9181-4f39-851d-3acff337e10c/glance-log/0.log" Jan 21 16:59:58 crc kubenswrapper[4760]: I0121 16:59:58.215008 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_468d7d17-9181-4f39-851d-3acff337e10c/glance-httpd/0.log" Jan 21 16:59:58 crc kubenswrapper[4760]: I0121 16:59:58.238274 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_5d78a94b-d39f-4654-936e-8a39369b2082/glance-log/0.log" Jan 21 16:59:58 crc kubenswrapper[4760]: I0121 16:59:58.268616 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-internal-api-0_5d78a94b-d39f-4654-936e-8a39369b2082/glance-httpd/0.log" Jan 21 16:59:58 crc kubenswrapper[4760]: I0121 16:59:58.656360 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5c9896dc76-gwrzv_0e7e96ce-a64f-4a21-97e1-b2ebabc7e236/horizon-log/0.log" Jan 21 16:59:58 crc kubenswrapper[4760]: I0121 16:59:58.811468 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5c9896dc76-gwrzv_0e7e96ce-a64f-4a21-97e1-b2ebabc7e236/horizon/0.log" Jan 21 16:59:58 crc kubenswrapper[4760]: I0121 16:59:58.833781 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5c9896dc76-gwrzv_0e7e96ce-a64f-4a21-97e1-b2ebabc7e236/horizon/1.log" Jan 21 16:59:58 crc kubenswrapper[4760]: I0121 16:59:58.850918 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-ssf7l_81b15839-b904-442b-bd7a-f42a043a7be6/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:59:58 crc kubenswrapper[4760]: I0121 16:59:58.880503 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-lcwmb_d89a08a9-deb3-4c27-ab2e-4fab854717cc/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 16:59:59 crc kubenswrapper[4760]: I0121 16:59:59.116035 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-5b497869f9-hs8kf_42613e5a-e22d-4358-8cd2-1ebfd1a42b55/keystone-api/0.log" Jan 21 16:59:59 crc kubenswrapper[4760]: I0121 16:59:59.132704 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_f0d87473-0ca7-46b5-a57f-611e3014ab77/kube-state-metrics/0.log" Jan 21 16:59:59 crc kubenswrapper[4760]: I0121 16:59:59.171932 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-6f958_60b03623-4db5-445f-89b4-61f39ac04dc2/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 17:00:00 crc kubenswrapper[4760]: I0121 17:00:00.207590 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483580-rvthm"] Jan 21 17:00:00 crc kubenswrapper[4760]: I0121 17:00:00.208925 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-rvthm" Jan 21 17:00:00 crc kubenswrapper[4760]: I0121 17:00:00.218001 4760 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 17:00:00 crc kubenswrapper[4760]: I0121 17:00:00.218250 4760 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 17:00:00 crc kubenswrapper[4760]: I0121 17:00:00.218558 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483580-rvthm"] Jan 21 17:00:00 crc kubenswrapper[4760]: I0121 17:00:00.296624 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx7zp\" (UniqueName: \"kubernetes.io/projected/fc25180f-86de-436c-a899-bbcd131c2e4d-kube-api-access-sx7zp\") pod \"collect-profiles-29483580-rvthm\" (UID: \"fc25180f-86de-436c-a899-bbcd131c2e4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-rvthm" Jan 21 17:00:00 crc kubenswrapper[4760]: I0121 17:00:00.296709 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc25180f-86de-436c-a899-bbcd131c2e4d-config-volume\") pod \"collect-profiles-29483580-rvthm\" (UID: \"fc25180f-86de-436c-a899-bbcd131c2e4d\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-rvthm" Jan 21 17:00:00 crc kubenswrapper[4760]: I0121 17:00:00.296984 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fc25180f-86de-436c-a899-bbcd131c2e4d-secret-volume\") pod \"collect-profiles-29483580-rvthm\" (UID: \"fc25180f-86de-436c-a899-bbcd131c2e4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-rvthm" Jan 21 17:00:00 crc kubenswrapper[4760]: I0121 17:00:00.398488 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sx7zp\" (UniqueName: \"kubernetes.io/projected/fc25180f-86de-436c-a899-bbcd131c2e4d-kube-api-access-sx7zp\") pod \"collect-profiles-29483580-rvthm\" (UID: \"fc25180f-86de-436c-a899-bbcd131c2e4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-rvthm" Jan 21 17:00:00 crc kubenswrapper[4760]: I0121 17:00:00.398878 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc25180f-86de-436c-a899-bbcd131c2e4d-config-volume\") pod \"collect-profiles-29483580-rvthm\" (UID: \"fc25180f-86de-436c-a899-bbcd131c2e4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-rvthm" Jan 21 17:00:00 crc kubenswrapper[4760]: I0121 17:00:00.399019 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fc25180f-86de-436c-a899-bbcd131c2e4d-secret-volume\") pod \"collect-profiles-29483580-rvthm\" (UID: \"fc25180f-86de-436c-a899-bbcd131c2e4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-rvthm" Jan 21 17:00:00 crc kubenswrapper[4760]: I0121 17:00:00.402949 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/fc25180f-86de-436c-a899-bbcd131c2e4d-config-volume\") pod \"collect-profiles-29483580-rvthm\" (UID: \"fc25180f-86de-436c-a899-bbcd131c2e4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-rvthm" Jan 21 17:00:00 crc kubenswrapper[4760]: I0121 17:00:00.407777 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fc25180f-86de-436c-a899-bbcd131c2e4d-secret-volume\") pod \"collect-profiles-29483580-rvthm\" (UID: \"fc25180f-86de-436c-a899-bbcd131c2e4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-rvthm" Jan 21 17:00:00 crc kubenswrapper[4760]: I0121 17:00:00.421393 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sx7zp\" (UniqueName: \"kubernetes.io/projected/fc25180f-86de-436c-a899-bbcd131c2e4d-kube-api-access-sx7zp\") pod \"collect-profiles-29483580-rvthm\" (UID: \"fc25180f-86de-436c-a899-bbcd131c2e4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-rvthm" Jan 21 17:00:00 crc kubenswrapper[4760]: I0121 17:00:00.539264 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-rvthm" Jan 21 17:00:01 crc kubenswrapper[4760]: I0121 17:00:01.117529 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483580-rvthm"] Jan 21 17:00:01 crc kubenswrapper[4760]: I0121 17:00:01.202419 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-rvthm" event={"ID":"fc25180f-86de-436c-a899-bbcd131c2e4d","Type":"ContainerStarted","Data":"34bc9e6d113969a805f89ba12cefb4fcc81c0cba559bd5d7d770242dc2e0404f"} Jan 21 17:00:02 crc kubenswrapper[4760]: I0121 17:00:02.217279 4760 generic.go:334] "Generic (PLEG): container finished" podID="fc25180f-86de-436c-a899-bbcd131c2e4d" containerID="0eb96898d3ef5b287609b2c9830c3909227fc93f2f1f6fbaf301807adb2f8131" exitCode=0 Jan 21 17:00:02 crc kubenswrapper[4760]: I0121 17:00:02.217348 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-rvthm" event={"ID":"fc25180f-86de-436c-a899-bbcd131c2e4d","Type":"ContainerDied","Data":"0eb96898d3ef5b287609b2c9830c3909227fc93f2f1f6fbaf301807adb2f8131"} Jan 21 17:00:03 crc kubenswrapper[4760]: I0121 17:00:03.686534 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-rvthm" Jan 21 17:00:03 crc kubenswrapper[4760]: I0121 17:00:03.799111 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc25180f-86de-436c-a899-bbcd131c2e4d-config-volume\") pod \"fc25180f-86de-436c-a899-bbcd131c2e4d\" (UID: \"fc25180f-86de-436c-a899-bbcd131c2e4d\") " Jan 21 17:00:03 crc kubenswrapper[4760]: I0121 17:00:03.799995 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc25180f-86de-436c-a899-bbcd131c2e4d-config-volume" (OuterVolumeSpecName: "config-volume") pod "fc25180f-86de-436c-a899-bbcd131c2e4d" (UID: "fc25180f-86de-436c-a899-bbcd131c2e4d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 17:00:03 crc kubenswrapper[4760]: I0121 17:00:03.799523 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sx7zp\" (UniqueName: \"kubernetes.io/projected/fc25180f-86de-436c-a899-bbcd131c2e4d-kube-api-access-sx7zp\") pod \"fc25180f-86de-436c-a899-bbcd131c2e4d\" (UID: \"fc25180f-86de-436c-a899-bbcd131c2e4d\") " Jan 21 17:00:03 crc kubenswrapper[4760]: I0121 17:00:03.800645 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fc25180f-86de-436c-a899-bbcd131c2e4d-secret-volume\") pod \"fc25180f-86de-436c-a899-bbcd131c2e4d\" (UID: \"fc25180f-86de-436c-a899-bbcd131c2e4d\") " Jan 21 17:00:03 crc kubenswrapper[4760]: I0121 17:00:03.801538 4760 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc25180f-86de-436c-a899-bbcd131c2e4d-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 17:00:03 crc kubenswrapper[4760]: I0121 17:00:03.841620 4760 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/secret/fc25180f-86de-436c-a899-bbcd131c2e4d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "fc25180f-86de-436c-a899-bbcd131c2e4d" (UID: "fc25180f-86de-436c-a899-bbcd131c2e4d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:00:03 crc kubenswrapper[4760]: I0121 17:00:03.902764 4760 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fc25180f-86de-436c-a899-bbcd131c2e4d-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 17:00:04 crc kubenswrapper[4760]: I0121 17:00:04.236383 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-rvthm" event={"ID":"fc25180f-86de-436c-a899-bbcd131c2e4d","Type":"ContainerDied","Data":"34bc9e6d113969a805f89ba12cefb4fcc81c0cba559bd5d7d770242dc2e0404f"} Jan 21 17:00:04 crc kubenswrapper[4760]: I0121 17:00:04.236427 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34bc9e6d113969a805f89ba12cefb4fcc81c0cba559bd5d7d770242dc2e0404f" Jan 21 17:00:04 crc kubenswrapper[4760]: I0121 17:00:04.236494 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483580-rvthm" Jan 21 17:00:04 crc kubenswrapper[4760]: I0121 17:00:04.346260 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc25180f-86de-436c-a899-bbcd131c2e4d-kube-api-access-sx7zp" (OuterVolumeSpecName: "kube-api-access-sx7zp") pod "fc25180f-86de-436c-a899-bbcd131c2e4d" (UID: "fc25180f-86de-436c-a899-bbcd131c2e4d"). InnerVolumeSpecName "kube-api-access-sx7zp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:00:04 crc kubenswrapper[4760]: I0121 17:00:04.423196 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sx7zp\" (UniqueName: \"kubernetes.io/projected/fc25180f-86de-436c-a899-bbcd131c2e4d-kube-api-access-sx7zp\") on node \"crc\" DevicePath \"\"" Jan 21 17:00:04 crc kubenswrapper[4760]: I0121 17:00:04.763764 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483535-65sc9"] Jan 21 17:00:04 crc kubenswrapper[4760]: I0121 17:00:04.781796 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483535-65sc9"] Jan 21 17:00:05 crc kubenswrapper[4760]: I0121 17:00:05.638689 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="751cfeab-2105-46b2-93bd-d5b7b09c8ee4" path="/var/lib/kubelet/pods/751cfeab-2105-46b2-93bd-d5b7b09c8ee4/volumes" Jan 21 17:00:08 crc kubenswrapper[4760]: I0121 17:00:08.693715 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7x8t2"] Jan 21 17:00:08 crc kubenswrapper[4760]: E0121 17:00:08.694751 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc25180f-86de-436c-a899-bbcd131c2e4d" containerName="collect-profiles" Jan 21 17:00:08 crc kubenswrapper[4760]: I0121 17:00:08.694776 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc25180f-86de-436c-a899-bbcd131c2e4d" containerName="collect-profiles" Jan 21 17:00:08 crc kubenswrapper[4760]: I0121 17:00:08.695014 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc25180f-86de-436c-a899-bbcd131c2e4d" containerName="collect-profiles" Jan 21 17:00:08 crc kubenswrapper[4760]: I0121 17:00:08.696687 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7x8t2" Jan 21 17:00:08 crc kubenswrapper[4760]: I0121 17:00:08.719156 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7x8t2"] Jan 21 17:00:08 crc kubenswrapper[4760]: I0121 17:00:08.820477 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc-utilities\") pod \"redhat-marketplace-7x8t2\" (UID: \"fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc\") " pod="openshift-marketplace/redhat-marketplace-7x8t2" Jan 21 17:00:08 crc kubenswrapper[4760]: I0121 17:00:08.820554 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6qt7\" (UniqueName: \"kubernetes.io/projected/fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc-kube-api-access-w6qt7\") pod \"redhat-marketplace-7x8t2\" (UID: \"fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc\") " pod="openshift-marketplace/redhat-marketplace-7x8t2" Jan 21 17:00:08 crc kubenswrapper[4760]: I0121 17:00:08.820660 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc-catalog-content\") pod \"redhat-marketplace-7x8t2\" (UID: \"fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc\") " pod="openshift-marketplace/redhat-marketplace-7x8t2" Jan 21 17:00:08 crc kubenswrapper[4760]: I0121 17:00:08.928561 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc-utilities\") pod \"redhat-marketplace-7x8t2\" (UID: \"fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc\") " pod="openshift-marketplace/redhat-marketplace-7x8t2" Jan 21 17:00:08 crc kubenswrapper[4760]: I0121 17:00:08.928982 4760 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-w6qt7\" (UniqueName: \"kubernetes.io/projected/fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc-kube-api-access-w6qt7\") pod \"redhat-marketplace-7x8t2\" (UID: \"fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc\") " pod="openshift-marketplace/redhat-marketplace-7x8t2" Jan 21 17:00:08 crc kubenswrapper[4760]: I0121 17:00:08.929067 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc-catalog-content\") pod \"redhat-marketplace-7x8t2\" (UID: \"fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc\") " pod="openshift-marketplace/redhat-marketplace-7x8t2" Jan 21 17:00:08 crc kubenswrapper[4760]: I0121 17:00:08.929235 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc-utilities\") pod \"redhat-marketplace-7x8t2\" (UID: \"fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc\") " pod="openshift-marketplace/redhat-marketplace-7x8t2" Jan 21 17:00:08 crc kubenswrapper[4760]: I0121 17:00:08.930004 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc-catalog-content\") pod \"redhat-marketplace-7x8t2\" (UID: \"fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc\") " pod="openshift-marketplace/redhat-marketplace-7x8t2" Jan 21 17:00:08 crc kubenswrapper[4760]: I0121 17:00:08.957603 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6qt7\" (UniqueName: \"kubernetes.io/projected/fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc-kube-api-access-w6qt7\") pod \"redhat-marketplace-7x8t2\" (UID: \"fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc\") " pod="openshift-marketplace/redhat-marketplace-7x8t2" Jan 21 17:00:09 crc kubenswrapper[4760]: I0121 17:00:09.036662 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7x8t2" Jan 21 17:00:09 crc kubenswrapper[4760]: I0121 17:00:09.513595 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7x8t2"] Jan 21 17:00:10 crc kubenswrapper[4760]: I0121 17:00:10.476116 4760 generic.go:334] "Generic (PLEG): container finished" podID="fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc" containerID="ff413c74107626567527f3771b33c2cc1eda4ab2a3d5f7944afd7c6f8bb36e26" exitCode=0 Jan 21 17:00:10 crc kubenswrapper[4760]: I0121 17:00:10.476512 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7x8t2" event={"ID":"fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc","Type":"ContainerDied","Data":"ff413c74107626567527f3771b33c2cc1eda4ab2a3d5f7944afd7c6f8bb36e26"} Jan 21 17:00:10 crc kubenswrapper[4760]: I0121 17:00:10.476547 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7x8t2" event={"ID":"fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc","Type":"ContainerStarted","Data":"7e465b5b58d8feb238e9bbcd9f1b3a3341c3f80c3d597e42348afaa7ca5809d6"} Jan 21 17:00:12 crc kubenswrapper[4760]: I0121 17:00:12.495227 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7x8t2" event={"ID":"fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc","Type":"ContainerStarted","Data":"6ce6d888338db2273a31a64879713684c9964253ecc75e0c82fc4788120bea2d"} Jan 21 17:00:13 crc kubenswrapper[4760]: I0121 17:00:13.507417 4760 generic.go:334] "Generic (PLEG): container finished" podID="fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc" containerID="6ce6d888338db2273a31a64879713684c9964253ecc75e0c82fc4788120bea2d" exitCode=0 Jan 21 17:00:13 crc kubenswrapper[4760]: I0121 17:00:13.507587 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7x8t2" 
event={"ID":"fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc","Type":"ContainerDied","Data":"6ce6d888338db2273a31a64879713684c9964253ecc75e0c82fc4788120bea2d"} Jan 21 17:00:14 crc kubenswrapper[4760]: I0121 17:00:14.518935 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7x8t2" event={"ID":"fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc","Type":"ContainerStarted","Data":"03263b39337a5467853289bfc45a8b94e4b5267ab8dd13d46daa1ba8e4521a25"} Jan 21 17:00:14 crc kubenswrapper[4760]: I0121 17:00:14.544736 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7x8t2" podStartSLOduration=3.059657019 podStartE2EDuration="6.544715728s" podCreationTimestamp="2026-01-21 17:00:08 +0000 UTC" firstStartedPulling="2026-01-21 17:00:10.478014169 +0000 UTC m=+4381.145783747" lastFinishedPulling="2026-01-21 17:00:13.963072868 +0000 UTC m=+4384.630842456" observedRunningTime="2026-01-21 17:00:14.540659097 +0000 UTC m=+4385.208428675" watchObservedRunningTime="2026-01-21 17:00:14.544715728 +0000 UTC m=+4385.212485316" Jan 21 17:00:18 crc kubenswrapper[4760]: I0121 17:00:18.612765 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-skl79_57bfd668-6e8b-475a-99b4-cdbd22c9c19f/controller/0.log" Jan 21 17:00:18 crc kubenswrapper[4760]: I0121 17:00:18.630927 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-skl79_57bfd668-6e8b-475a-99b4-cdbd22c9c19f/kube-rbac-proxy/0.log" Jan 21 17:00:18 crc kubenswrapper[4760]: I0121 17:00:18.672270 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/controller/0.log" Jan 21 17:00:19 crc kubenswrapper[4760]: I0121 17:00:19.037865 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7x8t2" Jan 21 17:00:19 crc kubenswrapper[4760]: 
I0121 17:00:19.039448 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7x8t2" Jan 21 17:00:19 crc kubenswrapper[4760]: I0121 17:00:19.497799 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7x8t2" Jan 21 17:00:19 crc kubenswrapper[4760]: I0121 17:00:19.619746 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7x8t2" Jan 21 17:00:19 crc kubenswrapper[4760]: I0121 17:00:19.748347 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7x8t2"] Jan 21 17:00:20 crc kubenswrapper[4760]: I0121 17:00:20.925510 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/frr/0.log" Jan 21 17:00:20 crc kubenswrapper[4760]: I0121 17:00:20.936609 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/reloader/0.log" Jan 21 17:00:20 crc kubenswrapper[4760]: I0121 17:00:20.944051 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/frr-metrics/0.log" Jan 21 17:00:20 crc kubenswrapper[4760]: I0121 17:00:20.955778 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/kube-rbac-proxy/0.log" Jan 21 17:00:20 crc kubenswrapper[4760]: I0121 17:00:20.969192 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/kube-rbac-proxy-frr/0.log" Jan 21 17:00:20 crc kubenswrapper[4760]: I0121 17:00:20.979792 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/cp-frr-files/0.log" Jan 21 17:00:20 crc 
kubenswrapper[4760]: I0121 17:00:20.989993 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/cp-reloader/0.log" Jan 21 17:00:21 crc kubenswrapper[4760]: I0121 17:00:21.002708 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/cp-metrics/0.log" Jan 21 17:00:21 crc kubenswrapper[4760]: I0121 17:00:21.018174 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-8cr5r_120c759b-d895-4898-a35a-2c7f74bb71b2/frr-k8s-webhook-server/0.log" Jan 21 17:00:21 crc kubenswrapper[4760]: I0121 17:00:21.053953 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6c4667f969-l2pv4_18110c9f-5a23-4a4c-9b39-289c23ff6e1c/manager/0.log" Jan 21 17:00:21 crc kubenswrapper[4760]: I0121 17:00:21.088110 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-746c87857b-5gngc_280fc33b-ec55-41cd-92e4-17ed099904a0/webhook-server/0.log" Jan 21 17:00:21 crc kubenswrapper[4760]: I0121 17:00:21.596712 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-d6jcx_dbe6716c-6a30-454c-979c-59566d2c29b6/speaker/0.log" Jan 21 17:00:21 crc kubenswrapper[4760]: I0121 17:00:21.600786 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7x8t2" podUID="fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc" containerName="registry-server" containerID="cri-o://03263b39337a5467853289bfc45a8b94e4b5267ab8dd13d46daa1ba8e4521a25" gracePeriod=2 Jan 21 17:00:21 crc kubenswrapper[4760]: I0121 17:00:21.604956 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-d6jcx_dbe6716c-6a30-454c-979c-59566d2c29b6/kube-rbac-proxy/0.log" Jan 21 17:00:22 crc kubenswrapper[4760]: I0121 
17:00:22.110243 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7x8t2" Jan 21 17:00:22 crc kubenswrapper[4760]: I0121 17:00:22.208291 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6qt7\" (UniqueName: \"kubernetes.io/projected/fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc-kube-api-access-w6qt7\") pod \"fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc\" (UID: \"fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc\") " Jan 21 17:00:22 crc kubenswrapper[4760]: I0121 17:00:22.208351 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc-catalog-content\") pod \"fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc\" (UID: \"fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc\") " Jan 21 17:00:22 crc kubenswrapper[4760]: I0121 17:00:22.208402 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc-utilities\") pod \"fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc\" (UID: \"fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc\") " Jan 21 17:00:22 crc kubenswrapper[4760]: I0121 17:00:22.210582 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc-utilities" (OuterVolumeSpecName: "utilities") pod "fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc" (UID: "fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:00:22 crc kubenswrapper[4760]: I0121 17:00:22.220306 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc-kube-api-access-w6qt7" (OuterVolumeSpecName: "kube-api-access-w6qt7") pod "fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc" (UID: "fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc"). InnerVolumeSpecName "kube-api-access-w6qt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:00:22 crc kubenswrapper[4760]: I0121 17:00:22.221158 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_06184570-059b-4132-a5b6-365e3e12e383/memcached/0.log" Jan 21 17:00:22 crc kubenswrapper[4760]: I0121 17:00:22.240701 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc" (UID: "fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:00:22 crc kubenswrapper[4760]: I0121 17:00:22.309940 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6qt7\" (UniqueName: \"kubernetes.io/projected/fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc-kube-api-access-w6qt7\") on node \"crc\" DevicePath \"\"" Jan 21 17:00:22 crc kubenswrapper[4760]: I0121 17:00:22.309967 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:00:22 crc kubenswrapper[4760]: I0121 17:00:22.309977 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:00:22 crc kubenswrapper[4760]: I0121 17:00:22.322298 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6c6778d77f-gkzrk_42e45354-7553-43f2-af5a-613dd1a6dde9/neutron-api/0.log" Jan 21 17:00:22 crc kubenswrapper[4760]: I0121 17:00:22.376770 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6c6778d77f-gkzrk_42e45354-7553-43f2-af5a-613dd1a6dde9/neutron-httpd/0.log" Jan 21 17:00:22 crc kubenswrapper[4760]: I0121 17:00:22.403103 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-cxnq2_93a8f498-bf0c-43f6-aad8-e26843ca3295/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 17:00:22 crc kubenswrapper[4760]: I0121 17:00:22.610866 4760 generic.go:334] "Generic (PLEG): container finished" podID="fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc" containerID="03263b39337a5467853289bfc45a8b94e4b5267ab8dd13d46daa1ba8e4521a25" exitCode=0 Jan 21 17:00:22 crc kubenswrapper[4760]: I0121 17:00:22.611165 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-7x8t2" event={"ID":"fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc","Type":"ContainerDied","Data":"03263b39337a5467853289bfc45a8b94e4b5267ab8dd13d46daa1ba8e4521a25"} Jan 21 17:00:22 crc kubenswrapper[4760]: I0121 17:00:22.611191 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7x8t2" event={"ID":"fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc","Type":"ContainerDied","Data":"7e465b5b58d8feb238e9bbcd9f1b3a3341c3f80c3d597e42348afaa7ca5809d6"} Jan 21 17:00:22 crc kubenswrapper[4760]: I0121 17:00:22.611208 4760 scope.go:117] "RemoveContainer" containerID="03263b39337a5467853289bfc45a8b94e4b5267ab8dd13d46daa1ba8e4521a25" Jan 21 17:00:22 crc kubenswrapper[4760]: I0121 17:00:22.611643 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7x8t2" Jan 21 17:00:22 crc kubenswrapper[4760]: I0121 17:00:22.626821 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_0d5def02-0b1b-4b2e-b03c-028387759ced/nova-api-log/0.log" Jan 21 17:00:22 crc kubenswrapper[4760]: I0121 17:00:22.647129 4760 scope.go:117] "RemoveContainer" containerID="6ce6d888338db2273a31a64879713684c9964253ecc75e0c82fc4788120bea2d" Jan 21 17:00:22 crc kubenswrapper[4760]: I0121 17:00:22.670732 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7x8t2"] Jan 21 17:00:22 crc kubenswrapper[4760]: I0121 17:00:22.681500 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7x8t2"] Jan 21 17:00:22 crc kubenswrapper[4760]: I0121 17:00:22.693593 4760 scope.go:117] "RemoveContainer" containerID="ff413c74107626567527f3771b33c2cc1eda4ab2a3d5f7944afd7c6f8bb36e26" Jan 21 17:00:22 crc kubenswrapper[4760]: I0121 17:00:22.740478 4760 scope.go:117] "RemoveContainer" containerID="03263b39337a5467853289bfc45a8b94e4b5267ab8dd13d46daa1ba8e4521a25" Jan 21 
17:00:22 crc kubenswrapper[4760]: E0121 17:00:22.745586 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03263b39337a5467853289bfc45a8b94e4b5267ab8dd13d46daa1ba8e4521a25\": container with ID starting with 03263b39337a5467853289bfc45a8b94e4b5267ab8dd13d46daa1ba8e4521a25 not found: ID does not exist" containerID="03263b39337a5467853289bfc45a8b94e4b5267ab8dd13d46daa1ba8e4521a25" Jan 21 17:00:22 crc kubenswrapper[4760]: I0121 17:00:22.745622 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03263b39337a5467853289bfc45a8b94e4b5267ab8dd13d46daa1ba8e4521a25"} err="failed to get container status \"03263b39337a5467853289bfc45a8b94e4b5267ab8dd13d46daa1ba8e4521a25\": rpc error: code = NotFound desc = could not find container \"03263b39337a5467853289bfc45a8b94e4b5267ab8dd13d46daa1ba8e4521a25\": container with ID starting with 03263b39337a5467853289bfc45a8b94e4b5267ab8dd13d46daa1ba8e4521a25 not found: ID does not exist" Jan 21 17:00:22 crc kubenswrapper[4760]: I0121 17:00:22.745651 4760 scope.go:117] "RemoveContainer" containerID="6ce6d888338db2273a31a64879713684c9964253ecc75e0c82fc4788120bea2d" Jan 21 17:00:22 crc kubenswrapper[4760]: E0121 17:00:22.746001 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ce6d888338db2273a31a64879713684c9964253ecc75e0c82fc4788120bea2d\": container with ID starting with 6ce6d888338db2273a31a64879713684c9964253ecc75e0c82fc4788120bea2d not found: ID does not exist" containerID="6ce6d888338db2273a31a64879713684c9964253ecc75e0c82fc4788120bea2d" Jan 21 17:00:22 crc kubenswrapper[4760]: I0121 17:00:22.746027 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ce6d888338db2273a31a64879713684c9964253ecc75e0c82fc4788120bea2d"} err="failed to get container status 
\"6ce6d888338db2273a31a64879713684c9964253ecc75e0c82fc4788120bea2d\": rpc error: code = NotFound desc = could not find container \"6ce6d888338db2273a31a64879713684c9964253ecc75e0c82fc4788120bea2d\": container with ID starting with 6ce6d888338db2273a31a64879713684c9964253ecc75e0c82fc4788120bea2d not found: ID does not exist" Jan 21 17:00:22 crc kubenswrapper[4760]: I0121 17:00:22.746042 4760 scope.go:117] "RemoveContainer" containerID="ff413c74107626567527f3771b33c2cc1eda4ab2a3d5f7944afd7c6f8bb36e26" Jan 21 17:00:22 crc kubenswrapper[4760]: E0121 17:00:22.746373 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff413c74107626567527f3771b33c2cc1eda4ab2a3d5f7944afd7c6f8bb36e26\": container with ID starting with ff413c74107626567527f3771b33c2cc1eda4ab2a3d5f7944afd7c6f8bb36e26 not found: ID does not exist" containerID="ff413c74107626567527f3771b33c2cc1eda4ab2a3d5f7944afd7c6f8bb36e26" Jan 21 17:00:22 crc kubenswrapper[4760]: I0121 17:00:22.746403 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff413c74107626567527f3771b33c2cc1eda4ab2a3d5f7944afd7c6f8bb36e26"} err="failed to get container status \"ff413c74107626567527f3771b33c2cc1eda4ab2a3d5f7944afd7c6f8bb36e26\": rpc error: code = NotFound desc = could not find container \"ff413c74107626567527f3771b33c2cc1eda4ab2a3d5f7944afd7c6f8bb36e26\": container with ID starting with ff413c74107626567527f3771b33c2cc1eda4ab2a3d5f7944afd7c6f8bb36e26 not found: ID does not exist" Jan 21 17:00:23 crc kubenswrapper[4760]: I0121 17:00:23.215748 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_0d5def02-0b1b-4b2e-b03c-028387759ced/nova-api-api/0.log" Jan 21 17:00:23 crc kubenswrapper[4760]: I0121 17:00:23.393207 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell0-conductor-0_56d015a2-9a67-4f44-a726-21949444f11b/nova-cell0-conductor-conductor/0.log" Jan 21 17:00:23 crc kubenswrapper[4760]: I0121 17:00:23.568379 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_5bc3a5b4-ab7d-4215-bd61-ce6c206856ae/nova-cell1-conductor-conductor/0.log" Jan 21 17:00:23 crc kubenswrapper[4760]: I0121 17:00:23.634043 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc" path="/var/lib/kubelet/pods/fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc/volumes" Jan 21 17:00:23 crc kubenswrapper[4760]: I0121 17:00:23.706376 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_7a3e9e72-ecf6-406f-ab2b-02804c7f23e5/nova-cell1-novncproxy-novncproxy/0.log" Jan 21 17:00:23 crc kubenswrapper[4760]: I0121 17:00:23.763873 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-tqhjb_5a4de6cd-9a26-49b4-a3f7-eb743b8830b1/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 17:00:23 crc kubenswrapper[4760]: I0121 17:00:23.868682 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_ab3a95e8-224b-406c-b0ad-b184e8bec225/nova-metadata-log/0.log" Jan 21 17:00:25 crc kubenswrapper[4760]: I0121 17:00:25.335551 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_ab3a95e8-224b-406c-b0ad-b184e8bec225/nova-metadata-metadata/0.log" Jan 21 17:00:25 crc kubenswrapper[4760]: I0121 17:00:25.519719 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_582a5834-a028-489f-943f-8928d5d9f26c/nova-scheduler-scheduler/0.log" Jan 21 17:00:25 crc kubenswrapper[4760]: I0121 17:00:25.547387 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_d0612ab6-de5e-4f61-9e1c-97f8237c996c/galera/0.log" Jan 21 
17:00:25 crc kubenswrapper[4760]: I0121 17:00:25.564734 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_d0612ab6-de5e-4f61-9e1c-97f8237c996c/mysql-bootstrap/0.log" Jan 21 17:00:25 crc kubenswrapper[4760]: I0121 17:00:25.590154 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_29bd8985-5f22-46e9-9868-607bf9be273e/galera/0.log" Jan 21 17:00:25 crc kubenswrapper[4760]: I0121 17:00:25.599859 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_29bd8985-5f22-46e9-9868-607bf9be273e/mysql-bootstrap/0.log" Jan 21 17:00:25 crc kubenswrapper[4760]: I0121 17:00:25.606440 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_8e6f14c6-f759-439a-9ea1-63a88e650f89/openstackclient/0.log" Jan 21 17:00:25 crc kubenswrapper[4760]: I0121 17:00:25.618843 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ltr79_c17cd40e-6e7b-4c1e-9ca8-e6edc1248330/ovn-controller/0.log" Jan 21 17:00:25 crc kubenswrapper[4760]: I0121 17:00:25.627067 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-sz9bq_0dd6dce4-cb26-4c6d-bcc3-d3d24f26a2cc/openstack-network-exporter/0.log" Jan 21 17:00:25 crc kubenswrapper[4760]: I0121 17:00:25.638084 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jfrjn_1a0315f5-89b8-4589-b088-2ea2bb15e078/ovsdb-server/0.log" Jan 21 17:00:25 crc kubenswrapper[4760]: I0121 17:00:25.650156 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jfrjn_1a0315f5-89b8-4589-b088-2ea2bb15e078/ovs-vswitchd/0.log" Jan 21 17:00:25 crc kubenswrapper[4760]: I0121 17:00:25.658275 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jfrjn_1a0315f5-89b8-4589-b088-2ea2bb15e078/ovsdb-server-init/0.log" Jan 21 17:00:25 crc 
kubenswrapper[4760]: I0121 17:00:25.687071 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-pv9jf_fee344d1-5ba0-4b85-85bf-8133d451624e/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 17:00:25 crc kubenswrapper[4760]: I0121 17:00:25.696233 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_50c45f6c-b35d-41f8-b358-afaf380d8f08/ovn-northd/0.log" Jan 21 17:00:25 crc kubenswrapper[4760]: I0121 17:00:25.704608 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_50c45f6c-b35d-41f8-b358-afaf380d8f08/openstack-network-exporter/0.log" Jan 21 17:00:25 crc kubenswrapper[4760]: I0121 17:00:25.735388 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_47448c69-3198-48d8-8623-9a339a934aca/ovsdbserver-nb/0.log" Jan 21 17:00:25 crc kubenswrapper[4760]: I0121 17:00:25.740587 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_47448c69-3198-48d8-8623-9a339a934aca/openstack-network-exporter/0.log" Jan 21 17:00:25 crc kubenswrapper[4760]: I0121 17:00:25.759756 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_9ab8d081-832d-4e4c-92e6-94a97545613c/ovsdbserver-sb/0.log" Jan 21 17:00:25 crc kubenswrapper[4760]: I0121 17:00:25.765987 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_9ab8d081-832d-4e4c-92e6-94a97545613c/openstack-network-exporter/0.log" Jan 21 17:00:25 crc kubenswrapper[4760]: I0121 17:00:25.864753 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-65c954fbbd-tb9kj_b3582d40-46db-4b7b-a7ca-12950184f371/placement-log/0.log" Jan 21 17:00:25 crc kubenswrapper[4760]: I0121 17:00:25.952492 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-65c954fbbd-tb9kj_b3582d40-46db-4b7b-a7ca-12950184f371/placement-api/0.log" Jan 21 
17:00:25 crc kubenswrapper[4760]: I0121 17:00:25.980779 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_3751c728-a57c-483f-847a-b8765d807937/rabbitmq/0.log" Jan 21 17:00:25 crc kubenswrapper[4760]: I0121 17:00:25.987021 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_3751c728-a57c-483f-847a-b8765d807937/setup-container/0.log" Jan 21 17:00:26 crc kubenswrapper[4760]: I0121 17:00:26.013672 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bf6d5aab-531b-4b6b-94fc-1b386b6b7684/rabbitmq/0.log" Jan 21 17:00:26 crc kubenswrapper[4760]: I0121 17:00:26.020484 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bf6d5aab-531b-4b6b-94fc-1b386b6b7684/setup-container/0.log" Jan 21 17:00:26 crc kubenswrapper[4760]: I0121 17:00:26.037574 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-r42kq_72a45862-35fa-4414-83d0-3e20bf784780/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 17:00:26 crc kubenswrapper[4760]: I0121 17:00:26.051540 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-8jj42_07be8207-721d-4d0a-bada-ac8b6c54c3ce/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 17:00:26 crc kubenswrapper[4760]: I0121 17:00:26.076209 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-bphdg_c223d637-a759-4b7a-9eca-d4aa22707301/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 17:00:26 crc kubenswrapper[4760]: I0121 17:00:26.088485 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-j5hkb_e0d57ee5-e43e-4edf-bbb1-1429b366bfac/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 17:00:26 crc kubenswrapper[4760]: 
I0121 17:00:26.104156 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-2hb8p_28bf7889-c488-4d87-8b69-e477b27a7909/ssh-known-hosts-edpm-deployment/0.log" Jan 21 17:00:26 crc kubenswrapper[4760]: I0121 17:00:26.272899 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7c9f777647-hfk58_92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c/proxy-httpd/0.log" Jan 21 17:00:26 crc kubenswrapper[4760]: I0121 17:00:26.293874 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7c9f777647-hfk58_92c42ad9-7fcc-46d4-a490-e43b5c6f6e5c/proxy-server/0.log" Jan 21 17:00:26 crc kubenswrapper[4760]: I0121 17:00:26.302833 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-vscfw_c41049e0-0ea2-4944-a23b-739987c73dce/swift-ring-rebalance/0.log" Jan 21 17:00:26 crc kubenswrapper[4760]: I0121 17:00:26.328128 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d1ccc2ed-d1e8-4b84-807d-55d70e8def12/account-server/0.log" Jan 21 17:00:26 crc kubenswrapper[4760]: I0121 17:00:26.363574 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d1ccc2ed-d1e8-4b84-807d-55d70e8def12/account-replicator/0.log" Jan 21 17:00:26 crc kubenswrapper[4760]: I0121 17:00:26.369511 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d1ccc2ed-d1e8-4b84-807d-55d70e8def12/account-auditor/0.log" Jan 21 17:00:26 crc kubenswrapper[4760]: I0121 17:00:26.374858 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d1ccc2ed-d1e8-4b84-807d-55d70e8def12/account-reaper/0.log" Jan 21 17:00:26 crc kubenswrapper[4760]: I0121 17:00:26.387648 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d1ccc2ed-d1e8-4b84-807d-55d70e8def12/container-server/0.log" Jan 21 17:00:26 crc kubenswrapper[4760]: I0121 
17:00:26.431490 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d1ccc2ed-d1e8-4b84-807d-55d70e8def12/container-replicator/0.log" Jan 21 17:00:26 crc kubenswrapper[4760]: I0121 17:00:26.437312 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d1ccc2ed-d1e8-4b84-807d-55d70e8def12/container-auditor/0.log" Jan 21 17:00:26 crc kubenswrapper[4760]: I0121 17:00:26.447158 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d1ccc2ed-d1e8-4b84-807d-55d70e8def12/container-updater/0.log" Jan 21 17:00:26 crc kubenswrapper[4760]: I0121 17:00:26.457632 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d1ccc2ed-d1e8-4b84-807d-55d70e8def12/object-server/0.log" Jan 21 17:00:26 crc kubenswrapper[4760]: I0121 17:00:26.488357 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d1ccc2ed-d1e8-4b84-807d-55d70e8def12/object-replicator/0.log" Jan 21 17:00:26 crc kubenswrapper[4760]: I0121 17:00:26.510818 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d1ccc2ed-d1e8-4b84-807d-55d70e8def12/object-auditor/0.log" Jan 21 17:00:26 crc kubenswrapper[4760]: I0121 17:00:26.518912 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d1ccc2ed-d1e8-4b84-807d-55d70e8def12/object-updater/0.log" Jan 21 17:00:26 crc kubenswrapper[4760]: I0121 17:00:26.528804 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d1ccc2ed-d1e8-4b84-807d-55d70e8def12/object-expirer/0.log" Jan 21 17:00:26 crc kubenswrapper[4760]: I0121 17:00:26.535214 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d1ccc2ed-d1e8-4b84-807d-55d70e8def12/rsync/0.log" Jan 21 17:00:26 crc kubenswrapper[4760]: I0121 17:00:26.547373 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_d1ccc2ed-d1e8-4b84-807d-55d70e8def12/swift-recon-cron/0.log" Jan 21 17:00:26 crc kubenswrapper[4760]: I0121 17:00:26.614882 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-hblbg_bb09237a-f1eb-4d14-894f-ac460ce3b7c3/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 17:00:26 crc kubenswrapper[4760]: I0121 17:00:26.646513 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_061a538a-0f39-44c0-9c33-e96701ced31e/tempest-tests-tempest-tests-runner/0.log" Jan 21 17:00:26 crc kubenswrapper[4760]: I0121 17:00:26.660166 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_e410b884-0dde-488f-8d8b-b60494f285d5/test-operator-logs-container/0.log" Jan 21 17:00:26 crc kubenswrapper[4760]: I0121 17:00:26.682646 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-csfth_9b589bc2-f08a-4319-a56e-145673e19eee/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 17:00:28 crc kubenswrapper[4760]: I0121 17:00:28.516237 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7ddb5c749-nszmq_ebbdf3cf-f86a-471e-89d0-d2a43f8245f6/manager/0.log" Jan 21 17:00:28 crc kubenswrapper[4760]: I0121 17:00:28.562543 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-zlfp7_6026e9ac-64d0-4386-bbd8-f0ac19960a22/manager/0.log" Jan 21 17:00:28 crc kubenswrapper[4760]: I0121 17:00:28.574674 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-kc2f5_8bcbe073-fa37-480d-a74a-af4c8d6a449b/manager/0.log" Jan 21 17:00:28 crc kubenswrapper[4760]: I0121 17:00:28.596985 
4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql_ab7a2391-a0e7-4576-a91a-bf31978dc7ad/extract/0.log" Jan 21 17:00:28 crc kubenswrapper[4760]: I0121 17:00:28.607077 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql_ab7a2391-a0e7-4576-a91a-bf31978dc7ad/util/0.log" Jan 21 17:00:28 crc kubenswrapper[4760]: I0121 17:00:28.616232 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql_ab7a2391-a0e7-4576-a91a-bf31978dc7ad/pull/0.log" Jan 21 17:00:28 crc kubenswrapper[4760]: I0121 17:00:28.694980 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-z2bkt_bac59717-45dd-495a-8874-b4f29a8adc3f/manager/0.log" Jan 21 17:00:28 crc kubenswrapper[4760]: I0121 17:00:28.704916 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-k92xb_97d1cdc7-8fc8-4e7b-b231-0cceadc61597/manager/0.log" Jan 21 17:00:28 crc kubenswrapper[4760]: I0121 17:00:28.733711 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-wp6f6_1b969ec1-1858-44ff-92da-a071b9ff15ee/manager/0.log" Jan 21 17:00:28 crc kubenswrapper[4760]: I0121 17:00:28.993228 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-7trxk_a441beba-fca9-47d4-bf5b-1533929ea421/manager/0.log" Jan 21 17:00:29 crc kubenswrapper[4760]: I0121 17:00:29.004276 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-z7mkd_a28cddfd-04c6-4860-a5eb-c341f2b25009/manager/0.log" Jan 21 17:00:29 crc 
kubenswrapper[4760]: I0121 17:00:29.072091 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-pp2ln_f3256ca9-a35f-4ae0-a56c-ac2eaf3bbdc3/manager/0.log" Jan 21 17:00:29 crc kubenswrapper[4760]: I0121 17:00:29.082272 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-rjrtw_1530b88f-1192-4aa8-b9ba-82f23e37ea6a/manager/0.log" Jan 21 17:00:29 crc kubenswrapper[4760]: I0121 17:00:29.147559 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-chvdr_80ad016c-9145-4e38-90f1-515a1fcd0fc7/manager/0.log" Jan 21 17:00:29 crc kubenswrapper[4760]: I0121 17:00:29.205289 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-7vqlg_2ef1c912-1599-4799-8f4c-1c9cb20045ba/manager/0.log" Jan 21 17:00:29 crc kubenswrapper[4760]: I0121 17:00:29.302258 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-xckkd_7e819adc-151b-456f-b41f-5101b03ab7b2/manager/0.log" Jan 21 17:00:29 crc kubenswrapper[4760]: I0121 17:00:29.314508 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-566bc_0252011a-4dac-4cad-94b3-39a6cf9bcd42/manager/0.log" Jan 21 17:00:29 crc kubenswrapper[4760]: I0121 17:00:29.330905 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt_28e62955-b747-4ca8-aa6b-d0678242596f/manager/0.log" Jan 21 17:00:29 crc kubenswrapper[4760]: I0121 17:00:29.468424 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5bb58d564b-c5ghx_5ef28c93-e9fc-4d47-b280-5372e4c7aaf7/operator/0.log" Jan 21 17:00:30 crc kubenswrapper[4760]: I0121 17:00:30.696053 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-867799c6f-wh9wg_4023c758-3567-4e32-97de-9501e117e965/manager/0.log" Jan 21 17:00:30 crc kubenswrapper[4760]: I0121 17:00:30.705459 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-7qqml_593c7623-4bb3-4d34-b7cf-b7bcaa5d292e/registry-server/0.log" Jan 21 17:00:30 crc kubenswrapper[4760]: I0121 17:00:30.760569 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-ffq4x_daef61f2-122d-4414-b7df-24982387fa95/manager/0.log" Jan 21 17:00:30 crc kubenswrapper[4760]: I0121 17:00:30.789804 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-lqgfs_75bcd345-56d6-4c12-9392-eea68c43dc30/manager/0.log" Jan 21 17:00:30 crc kubenswrapper[4760]: I0121 17:00:30.807883 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-vxwmq_a2806ede-c1d4-4571-8829-1b94cf7d1606/operator/0.log" Jan 21 17:00:30 crc kubenswrapper[4760]: I0121 17:00:30.832840 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-49prq_8d3c8a68-0896-4875-b6ff-d6f6fd2794b6/manager/0.log" Jan 21 17:00:30 crc kubenswrapper[4760]: I0121 17:00:30.883982 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-m7zb2_b511b419-e589-4783-a6a8-6d6fee8decde/manager/0.log" Jan 21 17:00:30 crc kubenswrapper[4760]: I0121 17:00:30.892918 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_test-operator-controller-manager-7cd8bc9dbb-cfsr6_813b8c35-22e2-41a4-9523-a6cf3cd99ab2/manager/0.log" Jan 21 17:00:30 crc kubenswrapper[4760]: I0121 17:00:30.904797 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-fkd2l_d8bbdcea-a920-4fb4-b434-2323a28d0ea7/manager/0.log" Jan 21 17:00:33 crc kubenswrapper[4760]: I0121 17:00:33.756280 4760 generic.go:334] "Generic (PLEG): container finished" podID="4c66b399-243c-4c7e-95d8-ea8d8e3f137e" containerID="e54eb16dc286959ca544ab71c5016b5dbf17032e048932e727fbc6332a098d2c" exitCode=0 Jan 21 17:00:33 crc kubenswrapper[4760]: I0121 17:00:33.756357 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-n76vm/crc-debug-9r66k" event={"ID":"4c66b399-243c-4c7e-95d8-ea8d8e3f137e","Type":"ContainerDied","Data":"e54eb16dc286959ca544ab71c5016b5dbf17032e048932e727fbc6332a098d2c"} Jan 21 17:00:34 crc kubenswrapper[4760]: I0121 17:00:34.901457 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-n76vm/crc-debug-9r66k" Jan 21 17:00:34 crc kubenswrapper[4760]: I0121 17:00:34.938561 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-n76vm/crc-debug-9r66k"] Jan 21 17:00:34 crc kubenswrapper[4760]: I0121 17:00:34.947056 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-n76vm/crc-debug-9r66k"] Jan 21 17:00:35 crc kubenswrapper[4760]: I0121 17:00:35.040418 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4c66b399-243c-4c7e-95d8-ea8d8e3f137e-host\") pod \"4c66b399-243c-4c7e-95d8-ea8d8e3f137e\" (UID: \"4c66b399-243c-4c7e-95d8-ea8d8e3f137e\") " Jan 21 17:00:35 crc kubenswrapper[4760]: I0121 17:00:35.040551 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c66b399-243c-4c7e-95d8-ea8d8e3f137e-host" (OuterVolumeSpecName: "host") pod "4c66b399-243c-4c7e-95d8-ea8d8e3f137e" (UID: "4c66b399-243c-4c7e-95d8-ea8d8e3f137e"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 17:00:35 crc kubenswrapper[4760]: I0121 17:00:35.040740 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phkj7\" (UniqueName: \"kubernetes.io/projected/4c66b399-243c-4c7e-95d8-ea8d8e3f137e-kube-api-access-phkj7\") pod \"4c66b399-243c-4c7e-95d8-ea8d8e3f137e\" (UID: \"4c66b399-243c-4c7e-95d8-ea8d8e3f137e\") " Jan 21 17:00:35 crc kubenswrapper[4760]: I0121 17:00:35.041206 4760 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4c66b399-243c-4c7e-95d8-ea8d8e3f137e-host\") on node \"crc\" DevicePath \"\"" Jan 21 17:00:35 crc kubenswrapper[4760]: I0121 17:00:35.048576 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c66b399-243c-4c7e-95d8-ea8d8e3f137e-kube-api-access-phkj7" (OuterVolumeSpecName: "kube-api-access-phkj7") pod "4c66b399-243c-4c7e-95d8-ea8d8e3f137e" (UID: "4c66b399-243c-4c7e-95d8-ea8d8e3f137e"). InnerVolumeSpecName "kube-api-access-phkj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:00:35 crc kubenswrapper[4760]: I0121 17:00:35.143486 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phkj7\" (UniqueName: \"kubernetes.io/projected/4c66b399-243c-4c7e-95d8-ea8d8e3f137e-kube-api-access-phkj7\") on node \"crc\" DevicePath \"\"" Jan 21 17:00:35 crc kubenswrapper[4760]: I0121 17:00:35.634663 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c66b399-243c-4c7e-95d8-ea8d8e3f137e" path="/var/lib/kubelet/pods/4c66b399-243c-4c7e-95d8-ea8d8e3f137e/volumes" Jan 21 17:00:35 crc kubenswrapper[4760]: I0121 17:00:35.777373 4760 scope.go:117] "RemoveContainer" containerID="e54eb16dc286959ca544ab71c5016b5dbf17032e048932e727fbc6332a098d2c" Jan 21 17:00:35 crc kubenswrapper[4760]: I0121 17:00:35.777477 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-n76vm/crc-debug-9r66k" Jan 21 17:00:35 crc kubenswrapper[4760]: I0121 17:00:35.980488 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-dm455_0df700c2-3091-4770-b404-cc81bc416387/control-plane-machine-set-operator/0.log" Jan 21 17:00:35 crc kubenswrapper[4760]: I0121 17:00:35.997227 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-4x9fq_3671d10c-81c6-4c7f-9117-1c237e4efe51/kube-rbac-proxy/0.log" Jan 21 17:00:36 crc kubenswrapper[4760]: I0121 17:00:36.005019 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-4x9fq_3671d10c-81c6-4c7f-9117-1c237e4efe51/machine-api-operator/0.log" Jan 21 17:00:36 crc kubenswrapper[4760]: I0121 17:00:36.205310 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-n76vm/crc-debug-lrtbz"] Jan 21 17:00:36 crc kubenswrapper[4760]: E0121 17:00:36.205768 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc" containerName="extract-content" Jan 21 17:00:36 crc kubenswrapper[4760]: I0121 17:00:36.205784 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc" containerName="extract-content" Jan 21 17:00:36 crc kubenswrapper[4760]: E0121 17:00:36.205817 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c66b399-243c-4c7e-95d8-ea8d8e3f137e" containerName="container-00" Jan 21 17:00:36 crc kubenswrapper[4760]: I0121 17:00:36.205823 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c66b399-243c-4c7e-95d8-ea8d8e3f137e" containerName="container-00" Jan 21 17:00:36 crc kubenswrapper[4760]: E0121 17:00:36.205836 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc" 
containerName="registry-server" Jan 21 17:00:36 crc kubenswrapper[4760]: I0121 17:00:36.205842 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc" containerName="registry-server" Jan 21 17:00:36 crc kubenswrapper[4760]: E0121 17:00:36.205855 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc" containerName="extract-utilities" Jan 21 17:00:36 crc kubenswrapper[4760]: I0121 17:00:36.205860 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc" containerName="extract-utilities" Jan 21 17:00:36 crc kubenswrapper[4760]: I0121 17:00:36.206058 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c66b399-243c-4c7e-95d8-ea8d8e3f137e" containerName="container-00" Jan 21 17:00:36 crc kubenswrapper[4760]: I0121 17:00:36.206080 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd50a9cc-33a2-4e5f-9ae3-0a3d0fa6f8fc" containerName="registry-server" Jan 21 17:00:36 crc kubenswrapper[4760]: I0121 17:00:36.206655 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-n76vm/crc-debug-lrtbz" Jan 21 17:00:36 crc kubenswrapper[4760]: I0121 17:00:36.366559 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/67f592a6-29e0-475c-ba4d-93e6731324f1-host\") pod \"crc-debug-lrtbz\" (UID: \"67f592a6-29e0-475c-ba4d-93e6731324f1\") " pod="openshift-must-gather-n76vm/crc-debug-lrtbz" Jan 21 17:00:36 crc kubenswrapper[4760]: I0121 17:00:36.366909 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsxsj\" (UniqueName: \"kubernetes.io/projected/67f592a6-29e0-475c-ba4d-93e6731324f1-kube-api-access-zsxsj\") pod \"crc-debug-lrtbz\" (UID: \"67f592a6-29e0-475c-ba4d-93e6731324f1\") " pod="openshift-must-gather-n76vm/crc-debug-lrtbz" Jan 21 17:00:36 crc kubenswrapper[4760]: I0121 17:00:36.469514 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/67f592a6-29e0-475c-ba4d-93e6731324f1-host\") pod \"crc-debug-lrtbz\" (UID: \"67f592a6-29e0-475c-ba4d-93e6731324f1\") " pod="openshift-must-gather-n76vm/crc-debug-lrtbz" Jan 21 17:00:36 crc kubenswrapper[4760]: I0121 17:00:36.469638 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/67f592a6-29e0-475c-ba4d-93e6731324f1-host\") pod \"crc-debug-lrtbz\" (UID: \"67f592a6-29e0-475c-ba4d-93e6731324f1\") " pod="openshift-must-gather-n76vm/crc-debug-lrtbz" Jan 21 17:00:36 crc kubenswrapper[4760]: I0121 17:00:36.469664 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsxsj\" (UniqueName: \"kubernetes.io/projected/67f592a6-29e0-475c-ba4d-93e6731324f1-kube-api-access-zsxsj\") pod \"crc-debug-lrtbz\" (UID: \"67f592a6-29e0-475c-ba4d-93e6731324f1\") " pod="openshift-must-gather-n76vm/crc-debug-lrtbz" Jan 21 17:00:36 crc 
kubenswrapper[4760]: I0121 17:00:36.500210 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsxsj\" (UniqueName: \"kubernetes.io/projected/67f592a6-29e0-475c-ba4d-93e6731324f1-kube-api-access-zsxsj\") pod \"crc-debug-lrtbz\" (UID: \"67f592a6-29e0-475c-ba4d-93e6731324f1\") " pod="openshift-must-gather-n76vm/crc-debug-lrtbz" Jan 21 17:00:36 crc kubenswrapper[4760]: I0121 17:00:36.530387 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-n76vm/crc-debug-lrtbz" Jan 21 17:00:36 crc kubenswrapper[4760]: I0121 17:00:36.786492 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-n76vm/crc-debug-lrtbz" event={"ID":"67f592a6-29e0-475c-ba4d-93e6731324f1","Type":"ContainerStarted","Data":"4803a6e31e0e39aaaffe8ef8e7cdc4709e3a82be4bd28c07a628ee25496ef3c5"} Jan 21 17:00:37 crc kubenswrapper[4760]: I0121 17:00:37.796916 4760 generic.go:334] "Generic (PLEG): container finished" podID="67f592a6-29e0-475c-ba4d-93e6731324f1" containerID="2dd528a43f784ce1134a2dbe812e4eee8a29e9b8d51d42b2c7d54a1b8c1fc965" exitCode=0 Jan 21 17:00:37 crc kubenswrapper[4760]: I0121 17:00:37.797028 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-n76vm/crc-debug-lrtbz" event={"ID":"67f592a6-29e0-475c-ba4d-93e6731324f1","Type":"ContainerDied","Data":"2dd528a43f784ce1134a2dbe812e4eee8a29e9b8d51d42b2c7d54a1b8c1fc965"} Jan 21 17:00:38 crc kubenswrapper[4760]: I0121 17:00:38.270282 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-n76vm/crc-debug-lrtbz"] Jan 21 17:00:38 crc kubenswrapper[4760]: I0121 17:00:38.278918 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-n76vm/crc-debug-lrtbz"] Jan 21 17:00:38 crc kubenswrapper[4760]: I0121 17:00:38.910393 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-n76vm/crc-debug-lrtbz" Jan 21 17:00:38 crc kubenswrapper[4760]: I0121 17:00:38.931140 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsxsj\" (UniqueName: \"kubernetes.io/projected/67f592a6-29e0-475c-ba4d-93e6731324f1-kube-api-access-zsxsj\") pod \"67f592a6-29e0-475c-ba4d-93e6731324f1\" (UID: \"67f592a6-29e0-475c-ba4d-93e6731324f1\") " Jan 21 17:00:38 crc kubenswrapper[4760]: I0121 17:00:38.931516 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/67f592a6-29e0-475c-ba4d-93e6731324f1-host\") pod \"67f592a6-29e0-475c-ba4d-93e6731324f1\" (UID: \"67f592a6-29e0-475c-ba4d-93e6731324f1\") " Jan 21 17:00:38 crc kubenswrapper[4760]: I0121 17:00:38.931656 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67f592a6-29e0-475c-ba4d-93e6731324f1-host" (OuterVolumeSpecName: "host") pod "67f592a6-29e0-475c-ba4d-93e6731324f1" (UID: "67f592a6-29e0-475c-ba4d-93e6731324f1"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 17:00:38 crc kubenswrapper[4760]: I0121 17:00:38.932412 4760 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/67f592a6-29e0-475c-ba4d-93e6731324f1-host\") on node \"crc\" DevicePath \"\"" Jan 21 17:00:38 crc kubenswrapper[4760]: I0121 17:00:38.943570 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67f592a6-29e0-475c-ba4d-93e6731324f1-kube-api-access-zsxsj" (OuterVolumeSpecName: "kube-api-access-zsxsj") pod "67f592a6-29e0-475c-ba4d-93e6731324f1" (UID: "67f592a6-29e0-475c-ba4d-93e6731324f1"). InnerVolumeSpecName "kube-api-access-zsxsj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:00:39 crc kubenswrapper[4760]: I0121 17:00:39.034377 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zsxsj\" (UniqueName: \"kubernetes.io/projected/67f592a6-29e0-475c-ba4d-93e6731324f1-kube-api-access-zsxsj\") on node \"crc\" DevicePath \"\"" Jan 21 17:00:39 crc kubenswrapper[4760]: I0121 17:00:39.491872 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-n76vm/crc-debug-kxxbh"] Jan 21 17:00:39 crc kubenswrapper[4760]: E0121 17:00:39.492612 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67f592a6-29e0-475c-ba4d-93e6731324f1" containerName="container-00" Jan 21 17:00:39 crc kubenswrapper[4760]: I0121 17:00:39.492728 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="67f592a6-29e0-475c-ba4d-93e6731324f1" containerName="container-00" Jan 21 17:00:39 crc kubenswrapper[4760]: I0121 17:00:39.493052 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="67f592a6-29e0-475c-ba4d-93e6731324f1" containerName="container-00" Jan 21 17:00:39 crc kubenswrapper[4760]: I0121 17:00:39.493935 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-n76vm/crc-debug-kxxbh" Jan 21 17:00:39 crc kubenswrapper[4760]: I0121 17:00:39.542693 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d97fa959-1dbc-4384-8abd-95085c2901cf-host\") pod \"crc-debug-kxxbh\" (UID: \"d97fa959-1dbc-4384-8abd-95085c2901cf\") " pod="openshift-must-gather-n76vm/crc-debug-kxxbh" Jan 21 17:00:39 crc kubenswrapper[4760]: I0121 17:00:39.542753 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmrsp\" (UniqueName: \"kubernetes.io/projected/d97fa959-1dbc-4384-8abd-95085c2901cf-kube-api-access-zmrsp\") pod \"crc-debug-kxxbh\" (UID: \"d97fa959-1dbc-4384-8abd-95085c2901cf\") " pod="openshift-must-gather-n76vm/crc-debug-kxxbh" Jan 21 17:00:39 crc kubenswrapper[4760]: I0121 17:00:39.635805 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67f592a6-29e0-475c-ba4d-93e6731324f1" path="/var/lib/kubelet/pods/67f592a6-29e0-475c-ba4d-93e6731324f1/volumes" Jan 21 17:00:39 crc kubenswrapper[4760]: I0121 17:00:39.644004 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d97fa959-1dbc-4384-8abd-95085c2901cf-host\") pod \"crc-debug-kxxbh\" (UID: \"d97fa959-1dbc-4384-8abd-95085c2901cf\") " pod="openshift-must-gather-n76vm/crc-debug-kxxbh" Jan 21 17:00:39 crc kubenswrapper[4760]: I0121 17:00:39.644439 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d97fa959-1dbc-4384-8abd-95085c2901cf-host\") pod \"crc-debug-kxxbh\" (UID: \"d97fa959-1dbc-4384-8abd-95085c2901cf\") " pod="openshift-must-gather-n76vm/crc-debug-kxxbh" Jan 21 17:00:39 crc kubenswrapper[4760]: I0121 17:00:39.645385 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-zmrsp\" (UniqueName: \"kubernetes.io/projected/d97fa959-1dbc-4384-8abd-95085c2901cf-kube-api-access-zmrsp\") pod \"crc-debug-kxxbh\" (UID: \"d97fa959-1dbc-4384-8abd-95085c2901cf\") " pod="openshift-must-gather-n76vm/crc-debug-kxxbh" Jan 21 17:00:39 crc kubenswrapper[4760]: I0121 17:00:39.663966 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmrsp\" (UniqueName: \"kubernetes.io/projected/d97fa959-1dbc-4384-8abd-95085c2901cf-kube-api-access-zmrsp\") pod \"crc-debug-kxxbh\" (UID: \"d97fa959-1dbc-4384-8abd-95085c2901cf\") " pod="openshift-must-gather-n76vm/crc-debug-kxxbh" Jan 21 17:00:39 crc kubenswrapper[4760]: I0121 17:00:39.813694 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-n76vm/crc-debug-kxxbh" Jan 21 17:00:39 crc kubenswrapper[4760]: I0121 17:00:39.816689 4760 scope.go:117] "RemoveContainer" containerID="2dd528a43f784ce1134a2dbe812e4eee8a29e9b8d51d42b2c7d54a1b8c1fc965" Jan 21 17:00:39 crc kubenswrapper[4760]: I0121 17:00:39.816921 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-n76vm/crc-debug-lrtbz" Jan 21 17:00:40 crc kubenswrapper[4760]: W0121 17:00:40.354143 4760 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd97fa959_1dbc_4384_8abd_95085c2901cf.slice/crio-c8355066403ede4a95cc881c83e094e68bb0c1064817217462b1f56bc14a78da WatchSource:0}: Error finding container c8355066403ede4a95cc881c83e094e68bb0c1064817217462b1f56bc14a78da: Status 404 returned error can't find the container with id c8355066403ede4a95cc881c83e094e68bb0c1064817217462b1f56bc14a78da Jan 21 17:00:40 crc kubenswrapper[4760]: I0121 17:00:40.830976 4760 generic.go:334] "Generic (PLEG): container finished" podID="d97fa959-1dbc-4384-8abd-95085c2901cf" containerID="7fc590519aad0afa156e804815135773e9289f8de79bf2217ae283fa181c7f3b" exitCode=0 Jan 21 17:00:40 crc kubenswrapper[4760]: I0121 17:00:40.831164 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-n76vm/crc-debug-kxxbh" event={"ID":"d97fa959-1dbc-4384-8abd-95085c2901cf","Type":"ContainerDied","Data":"7fc590519aad0afa156e804815135773e9289f8de79bf2217ae283fa181c7f3b"} Jan 21 17:00:40 crc kubenswrapper[4760]: I0121 17:00:40.831305 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-n76vm/crc-debug-kxxbh" event={"ID":"d97fa959-1dbc-4384-8abd-95085c2901cf","Type":"ContainerStarted","Data":"c8355066403ede4a95cc881c83e094e68bb0c1064817217462b1f56bc14a78da"} Jan 21 17:00:40 crc kubenswrapper[4760]: I0121 17:00:40.866220 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-n76vm/crc-debug-kxxbh"] Jan 21 17:00:40 crc kubenswrapper[4760]: I0121 17:00:40.873817 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-n76vm/crc-debug-kxxbh"] Jan 21 17:00:41 crc kubenswrapper[4760]: I0121 17:00:41.956199 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-n76vm/crc-debug-kxxbh" Jan 21 17:00:42 crc kubenswrapper[4760]: I0121 17:00:42.091918 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d97fa959-1dbc-4384-8abd-95085c2901cf-host\") pod \"d97fa959-1dbc-4384-8abd-95085c2901cf\" (UID: \"d97fa959-1dbc-4384-8abd-95085c2901cf\") " Jan 21 17:00:42 crc kubenswrapper[4760]: I0121 17:00:42.092093 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmrsp\" (UniqueName: \"kubernetes.io/projected/d97fa959-1dbc-4384-8abd-95085c2901cf-kube-api-access-zmrsp\") pod \"d97fa959-1dbc-4384-8abd-95085c2901cf\" (UID: \"d97fa959-1dbc-4384-8abd-95085c2901cf\") " Jan 21 17:00:42 crc kubenswrapper[4760]: I0121 17:00:42.093815 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d97fa959-1dbc-4384-8abd-95085c2901cf-host" (OuterVolumeSpecName: "host") pod "d97fa959-1dbc-4384-8abd-95085c2901cf" (UID: "d97fa959-1dbc-4384-8abd-95085c2901cf"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 17:00:42 crc kubenswrapper[4760]: I0121 17:00:42.098391 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d97fa959-1dbc-4384-8abd-95085c2901cf-kube-api-access-zmrsp" (OuterVolumeSpecName: "kube-api-access-zmrsp") pod "d97fa959-1dbc-4384-8abd-95085c2901cf" (UID: "d97fa959-1dbc-4384-8abd-95085c2901cf"). InnerVolumeSpecName "kube-api-access-zmrsp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:00:42 crc kubenswrapper[4760]: I0121 17:00:42.195271 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmrsp\" (UniqueName: \"kubernetes.io/projected/d97fa959-1dbc-4384-8abd-95085c2901cf-kube-api-access-zmrsp\") on node \"crc\" DevicePath \"\"" Jan 21 17:00:42 crc kubenswrapper[4760]: I0121 17:00:42.195313 4760 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d97fa959-1dbc-4384-8abd-95085c2901cf-host\") on node \"crc\" DevicePath \"\"" Jan 21 17:00:42 crc kubenswrapper[4760]: I0121 17:00:42.850600 4760 scope.go:117] "RemoveContainer" containerID="7fc590519aad0afa156e804815135773e9289f8de79bf2217ae283fa181c7f3b" Jan 21 17:00:42 crc kubenswrapper[4760]: I0121 17:00:42.850651 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-n76vm/crc-debug-kxxbh" Jan 21 17:00:43 crc kubenswrapper[4760]: I0121 17:00:43.639410 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d97fa959-1dbc-4384-8abd-95085c2901cf" path="/var/lib/kubelet/pods/d97fa959-1dbc-4384-8abd-95085c2901cf/volumes" Jan 21 17:00:44 crc kubenswrapper[4760]: I0121 17:00:44.051857 4760 scope.go:117] "RemoveContainer" containerID="1240d6fd1cedd54b513b218e448ec1051d4c4912f66b7e655805dc838e90a14c" Jan 21 17:00:44 crc kubenswrapper[4760]: I0121 17:00:44.465902 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-xn52x_2291564d-b6d1-4334-86b3-a41d012c6827/cert-manager-controller/0.log" Jan 21 17:00:44 crc kubenswrapper[4760]: I0121 17:00:44.482015 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-r7btf_f66dc60b-4a53-45ba-a0af-74d7ddd2d6b4/cert-manager-cainjector/0.log" Jan 21 17:00:44 crc kubenswrapper[4760]: I0121 17:00:44.495611 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-rhdtg_9c2bdefb-6d75-4da7-89bb-160ec8b900da/cert-manager-webhook/0.log" Jan 21 17:00:49 crc kubenswrapper[4760]: I0121 17:00:49.424072 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-gwfqw_b83e6b43-dd2e-439e-afb2-e168dcd42605/nmstate-console-plugin/0.log" Jan 21 17:00:49 crc kubenswrapper[4760]: I0121 17:00:49.447274 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-5b9fb_272d3255-cc65-43d6-89d6-37962ec071f1/nmstate-handler/0.log" Jan 21 17:00:49 crc kubenswrapper[4760]: I0121 17:00:49.458635 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-k5n9g_497fc134-f9a5-47ff-80ba-2c702922274a/nmstate-metrics/0.log" Jan 21 17:00:49 crc kubenswrapper[4760]: I0121 17:00:49.471077 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-k5n9g_497fc134-f9a5-47ff-80ba-2c702922274a/kube-rbac-proxy/0.log" Jan 21 17:00:49 crc kubenswrapper[4760]: I0121 17:00:49.485592 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-lrskp_f088d446-a779-4351-80aa-30d855335e4c/nmstate-operator/0.log" Jan 21 17:00:49 crc kubenswrapper[4760]: I0121 17:00:49.501771 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-v2hbl_80bcb070-867d-4d94-9f7b-73ff6c767a78/nmstate-webhook/0.log" Jan 21 17:00:59 crc kubenswrapper[4760]: I0121 17:00:59.942916 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-skl79_57bfd668-6e8b-475a-99b4-cdbd22c9c19f/controller/0.log" Jan 21 17:00:59 crc kubenswrapper[4760]: I0121 17:00:59.958022 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-6968d8fdc4-skl79_57bfd668-6e8b-475a-99b4-cdbd22c9c19f/kube-rbac-proxy/0.log" Jan 21 17:00:59 crc kubenswrapper[4760]: I0121 17:00:59.983055 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/controller/0.log" Jan 21 17:01:00 crc kubenswrapper[4760]: I0121 17:01:00.151020 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29483581-zsndz"] Jan 21 17:01:00 crc kubenswrapper[4760]: E0121 17:01:00.152633 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d97fa959-1dbc-4384-8abd-95085c2901cf" containerName="container-00" Jan 21 17:01:00 crc kubenswrapper[4760]: I0121 17:01:00.152661 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="d97fa959-1dbc-4384-8abd-95085c2901cf" containerName="container-00" Jan 21 17:01:00 crc kubenswrapper[4760]: I0121 17:01:00.152964 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="d97fa959-1dbc-4384-8abd-95085c2901cf" containerName="container-00" Jan 21 17:01:00 crc kubenswrapper[4760]: I0121 17:01:00.153807 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29483581-zsndz" Jan 21 17:01:00 crc kubenswrapper[4760]: I0121 17:01:00.163429 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29483581-zsndz"] Jan 21 17:01:00 crc kubenswrapper[4760]: I0121 17:01:00.245350 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bfa27c46-a32c-4d8e-a23f-12219e0cba4f-fernet-keys\") pod \"keystone-cron-29483581-zsndz\" (UID: \"bfa27c46-a32c-4d8e-a23f-12219e0cba4f\") " pod="openstack/keystone-cron-29483581-zsndz" Jan 21 17:01:00 crc kubenswrapper[4760]: I0121 17:01:00.245543 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfa27c46-a32c-4d8e-a23f-12219e0cba4f-config-data\") pod \"keystone-cron-29483581-zsndz\" (UID: \"bfa27c46-a32c-4d8e-a23f-12219e0cba4f\") " pod="openstack/keystone-cron-29483581-zsndz" Jan 21 17:01:00 crc kubenswrapper[4760]: I0121 17:01:00.245583 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvxtp\" (UniqueName: \"kubernetes.io/projected/bfa27c46-a32c-4d8e-a23f-12219e0cba4f-kube-api-access-kvxtp\") pod \"keystone-cron-29483581-zsndz\" (UID: \"bfa27c46-a32c-4d8e-a23f-12219e0cba4f\") " pod="openstack/keystone-cron-29483581-zsndz" Jan 21 17:01:00 crc kubenswrapper[4760]: I0121 17:01:00.245823 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfa27c46-a32c-4d8e-a23f-12219e0cba4f-combined-ca-bundle\") pod \"keystone-cron-29483581-zsndz\" (UID: \"bfa27c46-a32c-4d8e-a23f-12219e0cba4f\") " pod="openstack/keystone-cron-29483581-zsndz" Jan 21 17:01:00 crc kubenswrapper[4760]: I0121 17:01:00.347461 4760 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfa27c46-a32c-4d8e-a23f-12219e0cba4f-combined-ca-bundle\") pod \"keystone-cron-29483581-zsndz\" (UID: \"bfa27c46-a32c-4d8e-a23f-12219e0cba4f\") " pod="openstack/keystone-cron-29483581-zsndz" Jan 21 17:01:00 crc kubenswrapper[4760]: I0121 17:01:00.347588 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bfa27c46-a32c-4d8e-a23f-12219e0cba4f-fernet-keys\") pod \"keystone-cron-29483581-zsndz\" (UID: \"bfa27c46-a32c-4d8e-a23f-12219e0cba4f\") " pod="openstack/keystone-cron-29483581-zsndz" Jan 21 17:01:00 crc kubenswrapper[4760]: I0121 17:01:00.347666 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfa27c46-a32c-4d8e-a23f-12219e0cba4f-config-data\") pod \"keystone-cron-29483581-zsndz\" (UID: \"bfa27c46-a32c-4d8e-a23f-12219e0cba4f\") " pod="openstack/keystone-cron-29483581-zsndz" Jan 21 17:01:00 crc kubenswrapper[4760]: I0121 17:01:00.347684 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvxtp\" (UniqueName: \"kubernetes.io/projected/bfa27c46-a32c-4d8e-a23f-12219e0cba4f-kube-api-access-kvxtp\") pod \"keystone-cron-29483581-zsndz\" (UID: \"bfa27c46-a32c-4d8e-a23f-12219e0cba4f\") " pod="openstack/keystone-cron-29483581-zsndz" Jan 21 17:01:00 crc kubenswrapper[4760]: I0121 17:01:00.445478 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bfa27c46-a32c-4d8e-a23f-12219e0cba4f-fernet-keys\") pod \"keystone-cron-29483581-zsndz\" (UID: \"bfa27c46-a32c-4d8e-a23f-12219e0cba4f\") " pod="openstack/keystone-cron-29483581-zsndz" Jan 21 17:01:00 crc kubenswrapper[4760]: I0121 17:01:00.445541 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/bfa27c46-a32c-4d8e-a23f-12219e0cba4f-config-data\") pod \"keystone-cron-29483581-zsndz\" (UID: \"bfa27c46-a32c-4d8e-a23f-12219e0cba4f\") " pod="openstack/keystone-cron-29483581-zsndz" Jan 21 17:01:00 crc kubenswrapper[4760]: I0121 17:01:00.445566 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfa27c46-a32c-4d8e-a23f-12219e0cba4f-combined-ca-bundle\") pod \"keystone-cron-29483581-zsndz\" (UID: \"bfa27c46-a32c-4d8e-a23f-12219e0cba4f\") " pod="openstack/keystone-cron-29483581-zsndz" Jan 21 17:01:00 crc kubenswrapper[4760]: I0121 17:01:00.446515 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvxtp\" (UniqueName: \"kubernetes.io/projected/bfa27c46-a32c-4d8e-a23f-12219e0cba4f-kube-api-access-kvxtp\") pod \"keystone-cron-29483581-zsndz\" (UID: \"bfa27c46-a32c-4d8e-a23f-12219e0cba4f\") " pod="openstack/keystone-cron-29483581-zsndz" Jan 21 17:01:00 crc kubenswrapper[4760]: I0121 17:01:00.526671 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29483581-zsndz" Jan 21 17:01:01 crc kubenswrapper[4760]: I0121 17:01:01.087398 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29483581-zsndz"] Jan 21 17:01:01 crc kubenswrapper[4760]: I0121 17:01:01.742823 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/frr/0.log" Jan 21 17:01:01 crc kubenswrapper[4760]: I0121 17:01:01.752988 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/reloader/0.log" Jan 21 17:01:01 crc kubenswrapper[4760]: I0121 17:01:01.758717 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/frr-metrics/0.log" Jan 21 17:01:01 crc kubenswrapper[4760]: I0121 17:01:01.766205 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/kube-rbac-proxy/0.log" Jan 21 17:01:01 crc kubenswrapper[4760]: I0121 17:01:01.771466 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/kube-rbac-proxy-frr/0.log" Jan 21 17:01:01 crc kubenswrapper[4760]: I0121 17:01:01.780224 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/cp-frr-files/0.log" Jan 21 17:01:01 crc kubenswrapper[4760]: I0121 17:01:01.787786 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/cp-reloader/0.log" Jan 21 17:01:01 crc kubenswrapper[4760]: I0121 17:01:01.796083 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/cp-metrics/0.log" Jan 21 17:01:01 crc kubenswrapper[4760]: I0121 17:01:01.812468 4760 
log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-8cr5r_120c759b-d895-4898-a35a-2c7f74bb71b2/frr-k8s-webhook-server/0.log" Jan 21 17:01:01 crc kubenswrapper[4760]: I0121 17:01:01.843379 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6c4667f969-l2pv4_18110c9f-5a23-4a4c-9b39-289c23ff6e1c/manager/0.log" Jan 21 17:01:01 crc kubenswrapper[4760]: I0121 17:01:01.859109 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-746c87857b-5gngc_280fc33b-ec55-41cd-92e4-17ed099904a0/webhook-server/0.log" Jan 21 17:01:02 crc kubenswrapper[4760]: I0121 17:01:02.023286 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29483581-zsndz" event={"ID":"bfa27c46-a32c-4d8e-a23f-12219e0cba4f","Type":"ContainerStarted","Data":"cb4808bc00e5d5af1c631e90799b11a0b2c53b063dd7b8ec5afc9d2037c4a0a3"} Jan 21 17:01:02 crc kubenswrapper[4760]: I0121 17:01:02.023342 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29483581-zsndz" event={"ID":"bfa27c46-a32c-4d8e-a23f-12219e0cba4f","Type":"ContainerStarted","Data":"36ce05f9558b81be406159ffc4667499cf758f00c6861a631e9a6c0573a30b2e"} Jan 21 17:01:02 crc kubenswrapper[4760]: I0121 17:01:02.090033 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29483581-zsndz" podStartSLOduration=2.09001097 podStartE2EDuration="2.09001097s" podCreationTimestamp="2026-01-21 17:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 17:01:02.042246492 +0000 UTC m=+4432.710016070" watchObservedRunningTime="2026-01-21 17:01:02.09001097 +0000 UTC m=+4432.757780548" Jan 21 17:01:02 crc kubenswrapper[4760]: I0121 17:01:02.261387 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_speaker-d6jcx_dbe6716c-6a30-454c-979c-59566d2c29b6/speaker/0.log" Jan 21 17:01:02 crc kubenswrapper[4760]: I0121 17:01:02.269883 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-d6jcx_dbe6716c-6a30-454c-979c-59566d2c29b6/kube-rbac-proxy/0.log" Jan 21 17:01:05 crc kubenswrapper[4760]: I0121 17:01:05.049827 4760 generic.go:334] "Generic (PLEG): container finished" podID="bfa27c46-a32c-4d8e-a23f-12219e0cba4f" containerID="cb4808bc00e5d5af1c631e90799b11a0b2c53b063dd7b8ec5afc9d2037c4a0a3" exitCode=0 Jan 21 17:01:05 crc kubenswrapper[4760]: I0121 17:01:05.049898 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29483581-zsndz" event={"ID":"bfa27c46-a32c-4d8e-a23f-12219e0cba4f","Type":"ContainerDied","Data":"cb4808bc00e5d5af1c631e90799b11a0b2c53b063dd7b8ec5afc9d2037c4a0a3"} Jan 21 17:01:05 crc kubenswrapper[4760]: I0121 17:01:05.933277 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl_11e5baeb-8bc7-4f75-bfcf-5128246fe0af/extract/0.log" Jan 21 17:01:05 crc kubenswrapper[4760]: I0121 17:01:05.942462 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl_11e5baeb-8bc7-4f75-bfcf-5128246fe0af/util/0.log" Jan 21 17:01:05 crc kubenswrapper[4760]: I0121 17:01:05.952583 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcl9grl_11e5baeb-8bc7-4f75-bfcf-5128246fe0af/pull/0.log" Jan 21 17:01:05 crc kubenswrapper[4760]: I0121 17:01:05.969215 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k_e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3/extract/0.log" Jan 21 17:01:05 crc kubenswrapper[4760]: I0121 
17:01:05.981237 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k_e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3/util/0.log" Jan 21 17:01:05 crc kubenswrapper[4760]: I0121 17:01:05.990495 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7134db8k_e3f1ab22-a6bd-4a89-9b50-38d3e2dab1a3/pull/0.log" Jan 21 17:01:06 crc kubenswrapper[4760]: I0121 17:01:06.567884 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29483581-zsndz" Jan 21 17:01:06 crc kubenswrapper[4760]: I0121 17:01:06.612356 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-r9xz4_3b7a88f7-910c-443d-8dbc-471879998d6a/registry-server/0.log" Jan 21 17:01:06 crc kubenswrapper[4760]: I0121 17:01:06.618302 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-r9xz4_3b7a88f7-910c-443d-8dbc-471879998d6a/extract-utilities/0.log" Jan 21 17:01:06 crc kubenswrapper[4760]: I0121 17:01:06.629130 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-r9xz4_3b7a88f7-910c-443d-8dbc-471879998d6a/extract-content/0.log" Jan 21 17:01:06 crc kubenswrapper[4760]: I0121 17:01:06.667570 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvxtp\" (UniqueName: \"kubernetes.io/projected/bfa27c46-a32c-4d8e-a23f-12219e0cba4f-kube-api-access-kvxtp\") pod \"bfa27c46-a32c-4d8e-a23f-12219e0cba4f\" (UID: \"bfa27c46-a32c-4d8e-a23f-12219e0cba4f\") " Jan 21 17:01:06 crc kubenswrapper[4760]: I0121 17:01:06.668072 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bfa27c46-a32c-4d8e-a23f-12219e0cba4f-combined-ca-bundle\") pod \"bfa27c46-a32c-4d8e-a23f-12219e0cba4f\" (UID: \"bfa27c46-a32c-4d8e-a23f-12219e0cba4f\") " Jan 21 17:01:06 crc kubenswrapper[4760]: I0121 17:01:06.668137 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bfa27c46-a32c-4d8e-a23f-12219e0cba4f-fernet-keys\") pod \"bfa27c46-a32c-4d8e-a23f-12219e0cba4f\" (UID: \"bfa27c46-a32c-4d8e-a23f-12219e0cba4f\") " Jan 21 17:01:06 crc kubenswrapper[4760]: I0121 17:01:06.668163 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfa27c46-a32c-4d8e-a23f-12219e0cba4f-config-data\") pod \"bfa27c46-a32c-4d8e-a23f-12219e0cba4f\" (UID: \"bfa27c46-a32c-4d8e-a23f-12219e0cba4f\") " Jan 21 17:01:06 crc kubenswrapper[4760]: I0121 17:01:06.678248 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfa27c46-a32c-4d8e-a23f-12219e0cba4f-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "bfa27c46-a32c-4d8e-a23f-12219e0cba4f" (UID: "bfa27c46-a32c-4d8e-a23f-12219e0cba4f"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:01:06 crc kubenswrapper[4760]: I0121 17:01:06.678409 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfa27c46-a32c-4d8e-a23f-12219e0cba4f-kube-api-access-kvxtp" (OuterVolumeSpecName: "kube-api-access-kvxtp") pod "bfa27c46-a32c-4d8e-a23f-12219e0cba4f" (UID: "bfa27c46-a32c-4d8e-a23f-12219e0cba4f"). InnerVolumeSpecName "kube-api-access-kvxtp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:01:06 crc kubenswrapper[4760]: I0121 17:01:06.712979 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfa27c46-a32c-4d8e-a23f-12219e0cba4f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bfa27c46-a32c-4d8e-a23f-12219e0cba4f" (UID: "bfa27c46-a32c-4d8e-a23f-12219e0cba4f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:01:06 crc kubenswrapper[4760]: I0121 17:01:06.722140 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfa27c46-a32c-4d8e-a23f-12219e0cba4f-config-data" (OuterVolumeSpecName: "config-data") pod "bfa27c46-a32c-4d8e-a23f-12219e0cba4f" (UID: "bfa27c46-a32c-4d8e-a23f-12219e0cba4f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 17:01:06 crc kubenswrapper[4760]: I0121 17:01:06.770584 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvxtp\" (UniqueName: \"kubernetes.io/projected/bfa27c46-a32c-4d8e-a23f-12219e0cba4f-kube-api-access-kvxtp\") on node \"crc\" DevicePath \"\"" Jan 21 17:01:06 crc kubenswrapper[4760]: I0121 17:01:06.770619 4760 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfa27c46-a32c-4d8e-a23f-12219e0cba4f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 17:01:06 crc kubenswrapper[4760]: I0121 17:01:06.770630 4760 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bfa27c46-a32c-4d8e-a23f-12219e0cba4f-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 21 17:01:06 crc kubenswrapper[4760]: I0121 17:01:06.770639 4760 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfa27c46-a32c-4d8e-a23f-12219e0cba4f-config-data\") on node \"crc\" DevicePath 
\"\"" Jan 21 17:01:07 crc kubenswrapper[4760]: I0121 17:01:07.071642 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29483581-zsndz" event={"ID":"bfa27c46-a32c-4d8e-a23f-12219e0cba4f","Type":"ContainerDied","Data":"36ce05f9558b81be406159ffc4667499cf758f00c6861a631e9a6c0573a30b2e"} Jan 21 17:01:07 crc kubenswrapper[4760]: I0121 17:01:07.072004 4760 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36ce05f9558b81be406159ffc4667499cf758f00c6861a631e9a6c0573a30b2e" Jan 21 17:01:07 crc kubenswrapper[4760]: I0121 17:01:07.072022 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29483581-zsndz" Jan 21 17:01:07 crc kubenswrapper[4760]: I0121 17:01:07.292889 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-f6w64_eceda6b0-5176-4f10-83f7-2a652e48f206/registry-server/0.log" Jan 21 17:01:07 crc kubenswrapper[4760]: I0121 17:01:07.298466 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-f6w64_eceda6b0-5176-4f10-83f7-2a652e48f206/extract-utilities/0.log" Jan 21 17:01:07 crc kubenswrapper[4760]: I0121 17:01:07.306572 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-f6w64_eceda6b0-5176-4f10-83f7-2a652e48f206/extract-content/0.log" Jan 21 17:01:07 crc kubenswrapper[4760]: I0121 17:01:07.329172 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-lhqrl_a848eafc-6251-4b18-94fd-dddb46db86ca/marketplace-operator/0.log" Jan 21 17:01:07 crc kubenswrapper[4760]: I0121 17:01:07.520418 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-dbfv2_f0782378-6389-4c4d-b387-3d2860fb524f/registry-server/0.log" Jan 21 17:01:07 crc kubenswrapper[4760]: I0121 17:01:07.525425 4760 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-dbfv2_f0782378-6389-4c4d-b387-3d2860fb524f/extract-utilities/0.log" Jan 21 17:01:07 crc kubenswrapper[4760]: I0121 17:01:07.532961 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-dbfv2_f0782378-6389-4c4d-b387-3d2860fb524f/extract-content/0.log" Jan 21 17:01:08 crc kubenswrapper[4760]: I0121 17:01:08.141485 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-v4mmm_a74de4f6-26aa-473e-87a5-b4a2a30f0596/registry-server/0.log" Jan 21 17:01:08 crc kubenswrapper[4760]: I0121 17:01:08.147670 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-v4mmm_a74de4f6-26aa-473e-87a5-b4a2a30f0596/extract-utilities/0.log" Jan 21 17:01:08 crc kubenswrapper[4760]: I0121 17:01:08.156605 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-v4mmm_a74de4f6-26aa-473e-87a5-b4a2a30f0596/extract-content/0.log" Jan 21 17:01:51 crc kubenswrapper[4760]: I0121 17:01:51.744714 4760 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="29bd8985-5f22-46e9-9868-607bf9be273e" containerName="galera" probeResult="failure" output="command timed out" Jan 21 17:01:51 crc kubenswrapper[4760]: I0121 17:01:51.744717 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="29bd8985-5f22-46e9-9868-607bf9be273e" containerName="galera" probeResult="failure" output="command timed out" Jan 21 17:02:20 crc kubenswrapper[4760]: I0121 17:02:20.946513 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 
17:02:20 crc kubenswrapper[4760]: I0121 17:02:20.947124 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:02:22 crc kubenswrapper[4760]: I0121 17:02:22.314145 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-skl79_57bfd668-6e8b-475a-99b4-cdbd22c9c19f/controller/0.log" Jan 21 17:02:22 crc kubenswrapper[4760]: I0121 17:02:22.320901 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-skl79_57bfd668-6e8b-475a-99b4-cdbd22c9c19f/kube-rbac-proxy/0.log" Jan 21 17:02:22 crc kubenswrapper[4760]: I0121 17:02:22.347992 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/controller/0.log" Jan 21 17:02:22 crc kubenswrapper[4760]: I0121 17:02:22.417647 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-xn52x_2291564d-b6d1-4334-86b3-a41d012c6827/cert-manager-controller/0.log" Jan 21 17:02:22 crc kubenswrapper[4760]: I0121 17:02:22.446211 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-r7btf_f66dc60b-4a53-45ba-a0af-74d7ddd2d6b4/cert-manager-cainjector/0.log" Jan 21 17:02:22 crc kubenswrapper[4760]: I0121 17:02:22.456519 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-rhdtg_9c2bdefb-6d75-4da7-89bb-160ec8b900da/cert-manager-webhook/0.log" Jan 21 17:02:23 crc kubenswrapper[4760]: I0121 17:02:23.711585 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7ddb5c749-nszmq_ebbdf3cf-f86a-471e-89d0-d2a43f8245f6/manager/0.log" Jan 21 17:02:23 crc kubenswrapper[4760]: I0121 17:02:23.786095 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-zlfp7_6026e9ac-64d0-4386-bbd8-f0ac19960a22/manager/0.log" Jan 21 17:02:23 crc kubenswrapper[4760]: I0121 17:02:23.801820 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-kc2f5_8bcbe073-fa37-480d-a74a-af4c8d6a449b/manager/0.log" Jan 21 17:02:23 crc kubenswrapper[4760]: I0121 17:02:23.818826 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql_ab7a2391-a0e7-4576-a91a-bf31978dc7ad/extract/0.log" Jan 21 17:02:23 crc kubenswrapper[4760]: I0121 17:02:23.839640 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql_ab7a2391-a0e7-4576-a91a-bf31978dc7ad/util/0.log" Jan 21 17:02:23 crc kubenswrapper[4760]: I0121 17:02:23.848474 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/frr/0.log" Jan 21 17:02:23 crc kubenswrapper[4760]: I0121 17:02:23.852781 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql_ab7a2391-a0e7-4576-a91a-bf31978dc7ad/pull/0.log" Jan 21 17:02:23 crc kubenswrapper[4760]: I0121 17:02:23.860631 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/reloader/0.log" Jan 21 17:02:23 crc kubenswrapper[4760]: I0121 17:02:23.866847 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/frr-metrics/0.log" Jan 21 17:02:23 crc kubenswrapper[4760]: I0121 17:02:23.881711 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/kube-rbac-proxy/0.log" Jan 21 17:02:23 crc kubenswrapper[4760]: I0121 17:02:23.891257 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/kube-rbac-proxy-frr/0.log" Jan 21 17:02:23 crc kubenswrapper[4760]: I0121 17:02:23.897145 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/cp-frr-files/0.log" Jan 21 17:02:23 crc kubenswrapper[4760]: I0121 17:02:23.904307 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/cp-reloader/0.log" Jan 21 17:02:23 crc kubenswrapper[4760]: I0121 17:02:23.912676 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gsbq4_5f599753-8125-400e-b9dd-f94bee01fdf8/cp-metrics/0.log" Jan 21 17:02:23 crc kubenswrapper[4760]: I0121 17:02:23.930818 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-8cr5r_120c759b-d895-4898-a35a-2c7f74bb71b2/frr-k8s-webhook-server/0.log" Jan 21 17:02:23 crc kubenswrapper[4760]: I0121 17:02:23.947782 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-z2bkt_bac59717-45dd-495a-8874-b4f29a8adc3f/manager/0.log" Jan 21 17:02:23 crc kubenswrapper[4760]: I0121 17:02:23.959463 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-k92xb_97d1cdc7-8fc8-4e7b-b231-0cceadc61597/manager/0.log" Jan 21 17:02:23 crc kubenswrapper[4760]: I0121 17:02:23.965222 4760 
log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6c4667f969-l2pv4_18110c9f-5a23-4a4c-9b39-289c23ff6e1c/manager/0.log" Jan 21 17:02:23 crc kubenswrapper[4760]: I0121 17:02:23.983737 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-746c87857b-5gngc_280fc33b-ec55-41cd-92e4-17ed099904a0/webhook-server/0.log" Jan 21 17:02:23 crc kubenswrapper[4760]: I0121 17:02:23.984206 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-wp6f6_1b969ec1-1858-44ff-92da-a071b9ff15ee/manager/0.log" Jan 21 17:02:24 crc kubenswrapper[4760]: I0121 17:02:24.460208 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-7trxk_a441beba-fca9-47d4-bf5b-1533929ea421/manager/0.log" Jan 21 17:02:24 crc kubenswrapper[4760]: I0121 17:02:24.481057 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-z7mkd_a28cddfd-04c6-4860-a5eb-c341f2b25009/manager/0.log" Jan 21 17:02:24 crc kubenswrapper[4760]: I0121 17:02:24.513437 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-d6jcx_dbe6716c-6a30-454c-979c-59566d2c29b6/speaker/0.log" Jan 21 17:02:24 crc kubenswrapper[4760]: I0121 17:02:24.521548 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-d6jcx_dbe6716c-6a30-454c-979c-59566d2c29b6/kube-rbac-proxy/0.log" Jan 21 17:02:24 crc kubenswrapper[4760]: I0121 17:02:24.567512 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-pp2ln_f3256ca9-a35f-4ae0-a56c-ac2eaf3bbdc3/manager/0.log" Jan 21 17:02:24 crc kubenswrapper[4760]: I0121 17:02:24.581315 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-rjrtw_1530b88f-1192-4aa8-b9ba-82f23e37ea6a/manager/0.log" Jan 21 17:02:24 crc kubenswrapper[4760]: I0121 17:02:24.665046 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-chvdr_80ad016c-9145-4e38-90f1-515a1fcd0fc7/manager/0.log" Jan 21 17:02:24 crc kubenswrapper[4760]: I0121 17:02:24.716546 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-7vqlg_2ef1c912-1599-4799-8f4c-1c9cb20045ba/manager/0.log" Jan 21 17:02:24 crc kubenswrapper[4760]: I0121 17:02:24.798871 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-xckkd_7e819adc-151b-456f-b41f-5101b03ab7b2/manager/0.log" Jan 21 17:02:24 crc kubenswrapper[4760]: I0121 17:02:24.808715 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-566bc_0252011a-4dac-4cad-94b3-39a6cf9bcd42/manager/0.log" Jan 21 17:02:24 crc kubenswrapper[4760]: I0121 17:02:24.824388 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt_28e62955-b747-4ca8-aa6b-d0678242596f/manager/0.log" Jan 21 17:02:24 crc kubenswrapper[4760]: I0121 17:02:24.968918 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5bb58d564b-c5ghx_5ef28c93-e9fc-4d47-b280-5372e4c7aaf7/operator/0.log" Jan 21 17:02:25 crc kubenswrapper[4760]: I0121 17:02:25.630068 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-xn52x_2291564d-b6d1-4334-86b3-a41d012c6827/cert-manager-controller/0.log" Jan 21 17:02:25 crc kubenswrapper[4760]: I0121 17:02:25.653800 4760 log.go:25] "Finished parsing 
log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-r7btf_f66dc60b-4a53-45ba-a0af-74d7ddd2d6b4/cert-manager-cainjector/0.log" Jan 21 17:02:25 crc kubenswrapper[4760]: I0121 17:02:25.672412 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-rhdtg_9c2bdefb-6d75-4da7-89bb-160ec8b900da/cert-manager-webhook/0.log" Jan 21 17:02:26 crc kubenswrapper[4760]: I0121 17:02:26.065276 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-867799c6f-wh9wg_4023c758-3567-4e32-97de-9501e117e965/manager/0.log" Jan 21 17:02:26 crc kubenswrapper[4760]: I0121 17:02:26.101964 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-7qqml_593c7623-4bb3-4d34-b7cf-b7bcaa5d292e/registry-server/0.log" Jan 21 17:02:26 crc kubenswrapper[4760]: I0121 17:02:26.164317 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-ffq4x_daef61f2-122d-4414-b7df-24982387fa95/manager/0.log" Jan 21 17:02:26 crc kubenswrapper[4760]: I0121 17:02:26.204505 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-lqgfs_75bcd345-56d6-4c12-9392-eea68c43dc30/manager/0.log" Jan 21 17:02:26 crc kubenswrapper[4760]: I0121 17:02:26.234058 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-vxwmq_a2806ede-c1d4-4571-8829-1b94cf7d1606/operator/0.log" Jan 21 17:02:26 crc kubenswrapper[4760]: I0121 17:02:26.270007 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-49prq_8d3c8a68-0896-4875-b6ff-d6f6fd2794b6/manager/0.log" Jan 21 17:02:26 crc kubenswrapper[4760]: I0121 17:02:26.327276 4760 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-m7zb2_b511b419-e589-4783-a6a8-6d6fee8decde/manager/0.log" Jan 21 17:02:26 crc kubenswrapper[4760]: I0121 17:02:26.339517 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7cd8bc9dbb-cfsr6_813b8c35-22e2-41a4-9523-a6cf3cd99ab2/manager/0.log" Jan 21 17:02:26 crc kubenswrapper[4760]: I0121 17:02:26.351568 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-fkd2l_d8bbdcea-a920-4fb4-b434-2323a28d0ea7/manager/0.log" Jan 21 17:02:26 crc kubenswrapper[4760]: I0121 17:02:26.712861 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-dm455_0df700c2-3091-4770-b404-cc81bc416387/control-plane-machine-set-operator/0.log" Jan 21 17:02:26 crc kubenswrapper[4760]: I0121 17:02:26.730551 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-4x9fq_3671d10c-81c6-4c7f-9117-1c237e4efe51/kube-rbac-proxy/0.log" Jan 21 17:02:26 crc kubenswrapper[4760]: I0121 17:02:26.739394 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-4x9fq_3671d10c-81c6-4c7f-9117-1c237e4efe51/machine-api-operator/0.log" Jan 21 17:02:27 crc kubenswrapper[4760]: I0121 17:02:27.470175 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7ddb5c749-nszmq_ebbdf3cf-f86a-471e-89d0-d2a43f8245f6/manager/0.log" Jan 21 17:02:27 crc kubenswrapper[4760]: I0121 17:02:27.521355 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-zlfp7_6026e9ac-64d0-4386-bbd8-f0ac19960a22/manager/0.log" Jan 21 17:02:27 crc kubenswrapper[4760]: I0121 17:02:27.531867 4760 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-kc2f5_8bcbe073-fa37-480d-a74a-af4c8d6a449b/manager/0.log" Jan 21 17:02:27 crc kubenswrapper[4760]: I0121 17:02:27.541455 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql_ab7a2391-a0e7-4576-a91a-bf31978dc7ad/extract/0.log" Jan 21 17:02:27 crc kubenswrapper[4760]: I0121 17:02:27.550391 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql_ab7a2391-a0e7-4576-a91a-bf31978dc7ad/util/0.log" Jan 21 17:02:27 crc kubenswrapper[4760]: I0121 17:02:27.557633 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f4065b7f50d198460ca790557fad87d0859a27e69c8f7897a17ffcb37apw7ql_ab7a2391-a0e7-4576-a91a-bf31978dc7ad/pull/0.log" Jan 21 17:02:27 crc kubenswrapper[4760]: I0121 17:02:27.662973 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-z2bkt_bac59717-45dd-495a-8874-b4f29a8adc3f/manager/0.log" Jan 21 17:02:27 crc kubenswrapper[4760]: I0121 17:02:27.674851 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-k92xb_97d1cdc7-8fc8-4e7b-b231-0cceadc61597/manager/0.log" Jan 21 17:02:27 crc kubenswrapper[4760]: I0121 17:02:27.696921 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-wp6f6_1b969ec1-1858-44ff-92da-a071b9ff15ee/manager/0.log" Jan 21 17:02:27 crc kubenswrapper[4760]: I0121 17:02:27.717391 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-gwfqw_b83e6b43-dd2e-439e-afb2-e168dcd42605/nmstate-console-plugin/0.log" Jan 21 17:02:27 crc 
kubenswrapper[4760]: I0121 17:02:27.737073 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-5b9fb_272d3255-cc65-43d6-89d6-37962ec071f1/nmstate-handler/0.log" Jan 21 17:02:27 crc kubenswrapper[4760]: I0121 17:02:27.746500 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-k5n9g_497fc134-f9a5-47ff-80ba-2c702922274a/nmstate-metrics/0.log" Jan 21 17:02:27 crc kubenswrapper[4760]: I0121 17:02:27.755138 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-k5n9g_497fc134-f9a5-47ff-80ba-2c702922274a/kube-rbac-proxy/0.log" Jan 21 17:02:27 crc kubenswrapper[4760]: I0121 17:02:27.769307 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-lrskp_f088d446-a779-4351-80aa-30d855335e4c/nmstate-operator/0.log" Jan 21 17:02:27 crc kubenswrapper[4760]: I0121 17:02:27.781055 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-v2hbl_80bcb070-867d-4d94-9f7b-73ff6c767a78/nmstate-webhook/0.log" Jan 21 17:02:27 crc kubenswrapper[4760]: I0121 17:02:27.969991 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-7trxk_a441beba-fca9-47d4-bf5b-1533929ea421/manager/0.log" Jan 21 17:02:27 crc kubenswrapper[4760]: I0121 17:02:27.980695 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-z7mkd_a28cddfd-04c6-4860-a5eb-c341f2b25009/manager/0.log" Jan 21 17:02:28 crc kubenswrapper[4760]: I0121 17:02:28.055474 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-pp2ln_f3256ca9-a35f-4ae0-a56c-ac2eaf3bbdc3/manager/0.log" Jan 21 17:02:28 crc kubenswrapper[4760]: I0121 17:02:28.067964 4760 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-rjrtw_1530b88f-1192-4aa8-b9ba-82f23e37ea6a/manager/0.log" Jan 21 17:02:28 crc kubenswrapper[4760]: I0121 17:02:28.108570 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-chvdr_80ad016c-9145-4e38-90f1-515a1fcd0fc7/manager/0.log" Jan 21 17:02:28 crc kubenswrapper[4760]: I0121 17:02:28.155669 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-7vqlg_2ef1c912-1599-4799-8f4c-1c9cb20045ba/manager/0.log" Jan 21 17:02:28 crc kubenswrapper[4760]: I0121 17:02:28.231107 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-xckkd_7e819adc-151b-456f-b41f-5101b03ab7b2/manager/0.log" Jan 21 17:02:28 crc kubenswrapper[4760]: I0121 17:02:28.243497 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-566bc_0252011a-4dac-4cad-94b3-39a6cf9bcd42/manager/0.log" Jan 21 17:02:28 crc kubenswrapper[4760]: I0121 17:02:28.259586 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b8549x7kt_28e62955-b747-4ca8-aa6b-d0678242596f/manager/0.log" Jan 21 17:02:28 crc kubenswrapper[4760]: I0121 17:02:28.406657 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5bb58d564b-c5ghx_5ef28c93-e9fc-4d47-b280-5372e4c7aaf7/operator/0.log" Jan 21 17:02:29 crc kubenswrapper[4760]: I0121 17:02:29.595142 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-867799c6f-wh9wg_4023c758-3567-4e32-97de-9501e117e965/manager/0.log" Jan 21 17:02:29 crc kubenswrapper[4760]: I0121 
17:02:29.616763 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-7qqml_593c7623-4bb3-4d34-b7cf-b7bcaa5d292e/registry-server/0.log" Jan 21 17:02:29 crc kubenswrapper[4760]: I0121 17:02:29.679230 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-ffq4x_daef61f2-122d-4414-b7df-24982387fa95/manager/0.log" Jan 21 17:02:29 crc kubenswrapper[4760]: I0121 17:02:29.704370 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-lqgfs_75bcd345-56d6-4c12-9392-eea68c43dc30/manager/0.log" Jan 21 17:02:29 crc kubenswrapper[4760]: I0121 17:02:29.725888 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-vxwmq_a2806ede-c1d4-4571-8829-1b94cf7d1606/operator/0.log" Jan 21 17:02:29 crc kubenswrapper[4760]: I0121 17:02:29.762020 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-49prq_8d3c8a68-0896-4875-b6ff-d6f6fd2794b6/manager/0.log" Jan 21 17:02:29 crc kubenswrapper[4760]: I0121 17:02:29.848918 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-m7zb2_b511b419-e589-4783-a6a8-6d6fee8decde/manager/0.log" Jan 21 17:02:29 crc kubenswrapper[4760]: I0121 17:02:29.859573 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7cd8bc9dbb-cfsr6_813b8c35-22e2-41a4-9523-a6cf3cd99ab2/manager/0.log" Jan 21 17:02:29 crc kubenswrapper[4760]: I0121 17:02:29.870478 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-fkd2l_d8bbdcea-a920-4fb4-b434-2323a28d0ea7/manager/0.log" Jan 21 17:02:31 crc kubenswrapper[4760]: I0121 
17:02:31.446506 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-lkblz_bd3c6c18-f174-4022-96c5-5892413c76fd/kube-multus-additional-cni-plugins/0.log" Jan 21 17:02:31 crc kubenswrapper[4760]: I0121 17:02:31.455442 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-lkblz_bd3c6c18-f174-4022-96c5-5892413c76fd/egress-router-binary-copy/0.log" Jan 21 17:02:31 crc kubenswrapper[4760]: I0121 17:02:31.462295 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-lkblz_bd3c6c18-f174-4022-96c5-5892413c76fd/cni-plugins/0.log" Jan 21 17:02:31 crc kubenswrapper[4760]: I0121 17:02:31.473252 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-lkblz_bd3c6c18-f174-4022-96c5-5892413c76fd/bond-cni-plugin/0.log" Jan 21 17:02:31 crc kubenswrapper[4760]: I0121 17:02:31.480666 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-lkblz_bd3c6c18-f174-4022-96c5-5892413c76fd/routeoverride-cni/0.log" Jan 21 17:02:31 crc kubenswrapper[4760]: I0121 17:02:31.490612 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-lkblz_bd3c6c18-f174-4022-96c5-5892413c76fd/whereabouts-cni-bincopy/0.log" Jan 21 17:02:31 crc kubenswrapper[4760]: I0121 17:02:31.499161 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-lkblz_bd3c6c18-f174-4022-96c5-5892413c76fd/whereabouts-cni/0.log" Jan 21 17:02:31 crc kubenswrapper[4760]: I0121 17:02:31.528466 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-n6cjk_7ae6da0d-f707-4d3e-8625-cae54fe221d0/multus-admission-controller/0.log" Jan 21 17:02:31 crc kubenswrapper[4760]: I0121 17:02:31.535209 4760 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-n6cjk_7ae6da0d-f707-4d3e-8625-cae54fe221d0/kube-rbac-proxy/0.log" Jan 21 17:02:31 crc kubenswrapper[4760]: I0121 17:02:31.595005 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dx99k_7300c51f-415f-4696-bda1-a9e79ae5704a/kube-multus/2.log" Jan 21 17:02:31 crc kubenswrapper[4760]: I0121 17:02:31.662591 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dx99k_7300c51f-415f-4696-bda1-a9e79ae5704a/kube-multus/3.log" Jan 21 17:02:31 crc kubenswrapper[4760]: I0121 17:02:31.707521 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-bbr8l_0a4b6476-7a89-41b4-b918-5628f622c7c1/network-metrics-daemon/0.log" Jan 21 17:02:31 crc kubenswrapper[4760]: I0121 17:02:31.713263 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-bbr8l_0a4b6476-7a89-41b4-b918-5628f622c7c1/kube-rbac-proxy/0.log" Jan 21 17:02:50 crc kubenswrapper[4760]: I0121 17:02:50.954518 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:02:50 crc kubenswrapper[4760]: I0121 17:02:50.955315 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:03:20 crc kubenswrapper[4760]: I0121 17:03:20.945820 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:03:20 crc kubenswrapper[4760]: I0121 17:03:20.946253 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:03:20 crc kubenswrapper[4760]: I0121 17:03:20.946294 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" Jan 21 17:03:20 crc kubenswrapper[4760]: I0121 17:03:20.947039 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"487ee7a25796844d3241333d83e375f710897deb4616727fe5bac585a055bf45"} pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 17:03:20 crc kubenswrapper[4760]: I0121 17:03:20.947084 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" containerID="cri-o://487ee7a25796844d3241333d83e375f710897deb4616727fe5bac585a055bf45" gracePeriod=600 Jan 21 17:03:21 crc kubenswrapper[4760]: E0121 17:03:21.097772 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 17:03:21 crc kubenswrapper[4760]: I0121 17:03:21.290569 4760 generic.go:334] "Generic (PLEG): container finished" podID="5dd365e7-570c-4130-a299-30e376624ce2" containerID="487ee7a25796844d3241333d83e375f710897deb4616727fe5bac585a055bf45" exitCode=0 Jan 21 17:03:21 crc kubenswrapper[4760]: I0121 17:03:21.290618 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerDied","Data":"487ee7a25796844d3241333d83e375f710897deb4616727fe5bac585a055bf45"} Jan 21 17:03:21 crc kubenswrapper[4760]: I0121 17:03:21.290688 4760 scope.go:117] "RemoveContainer" containerID="6de7194e7847840eaef030fbacdefa560c3c693cba625d032ed94f16b72b9d9e" Jan 21 17:03:21 crc kubenswrapper[4760]: I0121 17:03:21.291441 4760 scope.go:117] "RemoveContainer" containerID="487ee7a25796844d3241333d83e375f710897deb4616727fe5bac585a055bf45" Jan 21 17:03:21 crc kubenswrapper[4760]: E0121 17:03:21.291769 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 17:03:32 crc kubenswrapper[4760]: I0121 17:03:32.623944 4760 scope.go:117] "RemoveContainer" containerID="487ee7a25796844d3241333d83e375f710897deb4616727fe5bac585a055bf45" Jan 21 17:03:32 crc kubenswrapper[4760]: E0121 17:03:32.624756 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 17:03:44 crc kubenswrapper[4760]: I0121 17:03:44.623458 4760 scope.go:117] "RemoveContainer" containerID="487ee7a25796844d3241333d83e375f710897deb4616727fe5bac585a055bf45" Jan 21 17:03:44 crc kubenswrapper[4760]: E0121 17:03:44.624385 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 17:03:58 crc kubenswrapper[4760]: I0121 17:03:58.623187 4760 scope.go:117] "RemoveContainer" containerID="487ee7a25796844d3241333d83e375f710897deb4616727fe5bac585a055bf45" Jan 21 17:03:58 crc kubenswrapper[4760]: E0121 17:03:58.623930 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 17:04:12 crc kubenswrapper[4760]: I0121 17:04:12.623462 4760 scope.go:117] "RemoveContainer" containerID="487ee7a25796844d3241333d83e375f710897deb4616727fe5bac585a055bf45" Jan 21 17:04:12 crc kubenswrapper[4760]: E0121 17:04:12.624093 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 17:04:17 crc kubenswrapper[4760]: I0121 17:04:17.596812 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vfn9b"] Jan 21 17:04:17 crc kubenswrapper[4760]: E0121 17:04:17.597991 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfa27c46-a32c-4d8e-a23f-12219e0cba4f" containerName="keystone-cron" Jan 21 17:04:17 crc kubenswrapper[4760]: I0121 17:04:17.598019 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfa27c46-a32c-4d8e-a23f-12219e0cba4f" containerName="keystone-cron" Jan 21 17:04:17 crc kubenswrapper[4760]: I0121 17:04:17.598300 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfa27c46-a32c-4d8e-a23f-12219e0cba4f" containerName="keystone-cron" Jan 21 17:04:17 crc kubenswrapper[4760]: I0121 17:04:17.600098 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vfn9b" Jan 21 17:04:17 crc kubenswrapper[4760]: I0121 17:04:17.616153 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vfn9b"] Jan 21 17:04:17 crc kubenswrapper[4760]: I0121 17:04:17.640236 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q48j8\" (UniqueName: \"kubernetes.io/projected/9ac16167-67c0-4b92-9132-d163c09388a5-kube-api-access-q48j8\") pod \"certified-operators-vfn9b\" (UID: \"9ac16167-67c0-4b92-9132-d163c09388a5\") " pod="openshift-marketplace/certified-operators-vfn9b" Jan 21 17:04:17 crc kubenswrapper[4760]: I0121 17:04:17.640350 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ac16167-67c0-4b92-9132-d163c09388a5-utilities\") pod \"certified-operators-vfn9b\" (UID: \"9ac16167-67c0-4b92-9132-d163c09388a5\") " pod="openshift-marketplace/certified-operators-vfn9b" Jan 21 17:04:17 crc kubenswrapper[4760]: I0121 17:04:17.640653 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ac16167-67c0-4b92-9132-d163c09388a5-catalog-content\") pod \"certified-operators-vfn9b\" (UID: \"9ac16167-67c0-4b92-9132-d163c09388a5\") " pod="openshift-marketplace/certified-operators-vfn9b" Jan 21 17:04:17 crc kubenswrapper[4760]: I0121 17:04:17.743132 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q48j8\" (UniqueName: \"kubernetes.io/projected/9ac16167-67c0-4b92-9132-d163c09388a5-kube-api-access-q48j8\") pod \"certified-operators-vfn9b\" (UID: \"9ac16167-67c0-4b92-9132-d163c09388a5\") " pod="openshift-marketplace/certified-operators-vfn9b" Jan 21 17:04:17 crc kubenswrapper[4760]: I0121 17:04:17.743194 4760 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ac16167-67c0-4b92-9132-d163c09388a5-utilities\") pod \"certified-operators-vfn9b\" (UID: \"9ac16167-67c0-4b92-9132-d163c09388a5\") " pod="openshift-marketplace/certified-operators-vfn9b" Jan 21 17:04:17 crc kubenswrapper[4760]: I0121 17:04:17.743248 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ac16167-67c0-4b92-9132-d163c09388a5-catalog-content\") pod \"certified-operators-vfn9b\" (UID: \"9ac16167-67c0-4b92-9132-d163c09388a5\") " pod="openshift-marketplace/certified-operators-vfn9b" Jan 21 17:04:17 crc kubenswrapper[4760]: I0121 17:04:17.744435 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ac16167-67c0-4b92-9132-d163c09388a5-utilities\") pod \"certified-operators-vfn9b\" (UID: \"9ac16167-67c0-4b92-9132-d163c09388a5\") " pod="openshift-marketplace/certified-operators-vfn9b" Jan 21 17:04:17 crc kubenswrapper[4760]: I0121 17:04:17.744547 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ac16167-67c0-4b92-9132-d163c09388a5-catalog-content\") pod \"certified-operators-vfn9b\" (UID: \"9ac16167-67c0-4b92-9132-d163c09388a5\") " pod="openshift-marketplace/certified-operators-vfn9b" Jan 21 17:04:17 crc kubenswrapper[4760]: I0121 17:04:17.768205 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q48j8\" (UniqueName: \"kubernetes.io/projected/9ac16167-67c0-4b92-9132-d163c09388a5-kube-api-access-q48j8\") pod \"certified-operators-vfn9b\" (UID: \"9ac16167-67c0-4b92-9132-d163c09388a5\") " pod="openshift-marketplace/certified-operators-vfn9b" Jan 21 17:04:17 crc kubenswrapper[4760]: I0121 17:04:17.924108 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vfn9b" Jan 21 17:04:18 crc kubenswrapper[4760]: I0121 17:04:18.494593 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vfn9b"] Jan 21 17:04:18 crc kubenswrapper[4760]: I0121 17:04:18.844183 4760 generic.go:334] "Generic (PLEG): container finished" podID="9ac16167-67c0-4b92-9132-d163c09388a5" containerID="2c5fc376b01305cb3f8a2760794b9fb295123166b6a1b29191c5a589861a1d48" exitCode=0 Jan 21 17:04:18 crc kubenswrapper[4760]: I0121 17:04:18.844385 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vfn9b" event={"ID":"9ac16167-67c0-4b92-9132-d163c09388a5","Type":"ContainerDied","Data":"2c5fc376b01305cb3f8a2760794b9fb295123166b6a1b29191c5a589861a1d48"} Jan 21 17:04:18 crc kubenswrapper[4760]: I0121 17:04:18.844546 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vfn9b" event={"ID":"9ac16167-67c0-4b92-9132-d163c09388a5","Type":"ContainerStarted","Data":"f5c0978cf59e60dafe4d2b632406ea068a43847fa3e3d06aa075f2c759cebcf8"} Jan 21 17:04:18 crc kubenswrapper[4760]: I0121 17:04:18.846013 4760 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 17:04:19 crc kubenswrapper[4760]: I0121 17:04:19.858946 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vfn9b" event={"ID":"9ac16167-67c0-4b92-9132-d163c09388a5","Type":"ContainerStarted","Data":"1cd54442a1cec95c755451dfea5838f0d379993163f3ad7b883b77e18d6607ba"} Jan 21 17:04:20 crc kubenswrapper[4760]: I0121 17:04:20.877894 4760 generic.go:334] "Generic (PLEG): container finished" podID="9ac16167-67c0-4b92-9132-d163c09388a5" containerID="1cd54442a1cec95c755451dfea5838f0d379993163f3ad7b883b77e18d6607ba" exitCode=0 Jan 21 17:04:20 crc kubenswrapper[4760]: I0121 17:04:20.878123 4760 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-vfn9b" event={"ID":"9ac16167-67c0-4b92-9132-d163c09388a5","Type":"ContainerDied","Data":"1cd54442a1cec95c755451dfea5838f0d379993163f3ad7b883b77e18d6607ba"} Jan 21 17:04:21 crc kubenswrapper[4760]: I0121 17:04:21.888110 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vfn9b" event={"ID":"9ac16167-67c0-4b92-9132-d163c09388a5","Type":"ContainerStarted","Data":"3c782d2e89c71df4f1759fdee6663657c848d9244e08750b622d22ca46508e0b"} Jan 21 17:04:21 crc kubenswrapper[4760]: I0121 17:04:21.915613 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vfn9b" podStartSLOduration=2.367597117 podStartE2EDuration="4.915591312s" podCreationTimestamp="2026-01-21 17:04:17 +0000 UTC" firstStartedPulling="2026-01-21 17:04:18.845707885 +0000 UTC m=+4629.513477473" lastFinishedPulling="2026-01-21 17:04:21.39370208 +0000 UTC m=+4632.061471668" observedRunningTime="2026-01-21 17:04:21.909171762 +0000 UTC m=+4632.576941350" watchObservedRunningTime="2026-01-21 17:04:21.915591312 +0000 UTC m=+4632.583360900" Jan 21 17:04:23 crc kubenswrapper[4760]: I0121 17:04:23.625536 4760 scope.go:117] "RemoveContainer" containerID="487ee7a25796844d3241333d83e375f710897deb4616727fe5bac585a055bf45" Jan 21 17:04:23 crc kubenswrapper[4760]: E0121 17:04:23.625834 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 17:04:27 crc kubenswrapper[4760]: I0121 17:04:27.924676 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-vfn9b" Jan 21 17:04:27 crc kubenswrapper[4760]: I0121 17:04:27.926120 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vfn9b" Jan 21 17:04:27 crc kubenswrapper[4760]: I0121 17:04:27.975282 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vfn9b" Jan 21 17:04:29 crc kubenswrapper[4760]: I0121 17:04:29.021146 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vfn9b" Jan 21 17:04:29 crc kubenswrapper[4760]: I0121 17:04:29.071174 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vfn9b"] Jan 21 17:04:31 crc kubenswrapper[4760]: I0121 17:04:31.004038 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vfn9b" podUID="9ac16167-67c0-4b92-9132-d163c09388a5" containerName="registry-server" containerID="cri-o://3c782d2e89c71df4f1759fdee6663657c848d9244e08750b622d22ca46508e0b" gracePeriod=2 Jan 21 17:04:31 crc kubenswrapper[4760]: I0121 17:04:31.473671 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vfn9b" Jan 21 17:04:31 crc kubenswrapper[4760]: I0121 17:04:31.509138 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ac16167-67c0-4b92-9132-d163c09388a5-catalog-content\") pod \"9ac16167-67c0-4b92-9132-d163c09388a5\" (UID: \"9ac16167-67c0-4b92-9132-d163c09388a5\") " Jan 21 17:04:31 crc kubenswrapper[4760]: I0121 17:04:31.509471 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ac16167-67c0-4b92-9132-d163c09388a5-utilities\") pod \"9ac16167-67c0-4b92-9132-d163c09388a5\" (UID: \"9ac16167-67c0-4b92-9132-d163c09388a5\") " Jan 21 17:04:31 crc kubenswrapper[4760]: I0121 17:04:31.509656 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q48j8\" (UniqueName: \"kubernetes.io/projected/9ac16167-67c0-4b92-9132-d163c09388a5-kube-api-access-q48j8\") pod \"9ac16167-67c0-4b92-9132-d163c09388a5\" (UID: \"9ac16167-67c0-4b92-9132-d163c09388a5\") " Jan 21 17:04:31 crc kubenswrapper[4760]: I0121 17:04:31.512062 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ac16167-67c0-4b92-9132-d163c09388a5-utilities" (OuterVolumeSpecName: "utilities") pod "9ac16167-67c0-4b92-9132-d163c09388a5" (UID: "9ac16167-67c0-4b92-9132-d163c09388a5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:04:31 crc kubenswrapper[4760]: I0121 17:04:31.521245 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ac16167-67c0-4b92-9132-d163c09388a5-kube-api-access-q48j8" (OuterVolumeSpecName: "kube-api-access-q48j8") pod "9ac16167-67c0-4b92-9132-d163c09388a5" (UID: "9ac16167-67c0-4b92-9132-d163c09388a5"). InnerVolumeSpecName "kube-api-access-q48j8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:04:31 crc kubenswrapper[4760]: I0121 17:04:31.563902 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ac16167-67c0-4b92-9132-d163c09388a5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9ac16167-67c0-4b92-9132-d163c09388a5" (UID: "9ac16167-67c0-4b92-9132-d163c09388a5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:04:31 crc kubenswrapper[4760]: I0121 17:04:31.612152 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ac16167-67c0-4b92-9132-d163c09388a5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:04:31 crc kubenswrapper[4760]: I0121 17:04:31.612517 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ac16167-67c0-4b92-9132-d163c09388a5-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:04:31 crc kubenswrapper[4760]: I0121 17:04:31.612909 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q48j8\" (UniqueName: \"kubernetes.io/projected/9ac16167-67c0-4b92-9132-d163c09388a5-kube-api-access-q48j8\") on node \"crc\" DevicePath \"\"" Jan 21 17:04:32 crc kubenswrapper[4760]: I0121 17:04:32.016539 4760 generic.go:334] "Generic (PLEG): container finished" podID="9ac16167-67c0-4b92-9132-d163c09388a5" containerID="3c782d2e89c71df4f1759fdee6663657c848d9244e08750b622d22ca46508e0b" exitCode=0 Jan 21 17:04:32 crc kubenswrapper[4760]: I0121 17:04:32.016620 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vfn9b" event={"ID":"9ac16167-67c0-4b92-9132-d163c09388a5","Type":"ContainerDied","Data":"3c782d2e89c71df4f1759fdee6663657c848d9244e08750b622d22ca46508e0b"} Jan 21 17:04:32 crc kubenswrapper[4760]: I0121 17:04:32.016657 4760 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vfn9b" Jan 21 17:04:32 crc kubenswrapper[4760]: I0121 17:04:32.017670 4760 scope.go:117] "RemoveContainer" containerID="3c782d2e89c71df4f1759fdee6663657c848d9244e08750b622d22ca46508e0b" Jan 21 17:04:32 crc kubenswrapper[4760]: I0121 17:04:32.017653 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vfn9b" event={"ID":"9ac16167-67c0-4b92-9132-d163c09388a5","Type":"ContainerDied","Data":"f5c0978cf59e60dafe4d2b632406ea068a43847fa3e3d06aa075f2c759cebcf8"} Jan 21 17:04:32 crc kubenswrapper[4760]: I0121 17:04:32.054659 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vfn9b"] Jan 21 17:04:32 crc kubenswrapper[4760]: I0121 17:04:32.061190 4760 scope.go:117] "RemoveContainer" containerID="1cd54442a1cec95c755451dfea5838f0d379993163f3ad7b883b77e18d6607ba" Jan 21 17:04:32 crc kubenswrapper[4760]: I0121 17:04:32.063816 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vfn9b"] Jan 21 17:04:32 crc kubenswrapper[4760]: I0121 17:04:32.081310 4760 scope.go:117] "RemoveContainer" containerID="2c5fc376b01305cb3f8a2760794b9fb295123166b6a1b29191c5a589861a1d48" Jan 21 17:04:32 crc kubenswrapper[4760]: I0121 17:04:32.118787 4760 scope.go:117] "RemoveContainer" containerID="3c782d2e89c71df4f1759fdee6663657c848d9244e08750b622d22ca46508e0b" Jan 21 17:04:32 crc kubenswrapper[4760]: E0121 17:04:32.119142 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c782d2e89c71df4f1759fdee6663657c848d9244e08750b622d22ca46508e0b\": container with ID starting with 3c782d2e89c71df4f1759fdee6663657c848d9244e08750b622d22ca46508e0b not found: ID does not exist" containerID="3c782d2e89c71df4f1759fdee6663657c848d9244e08750b622d22ca46508e0b" Jan 21 17:04:32 crc kubenswrapper[4760]: I0121 17:04:32.119180 
4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c782d2e89c71df4f1759fdee6663657c848d9244e08750b622d22ca46508e0b"} err="failed to get container status \"3c782d2e89c71df4f1759fdee6663657c848d9244e08750b622d22ca46508e0b\": rpc error: code = NotFound desc = could not find container \"3c782d2e89c71df4f1759fdee6663657c848d9244e08750b622d22ca46508e0b\": container with ID starting with 3c782d2e89c71df4f1759fdee6663657c848d9244e08750b622d22ca46508e0b not found: ID does not exist" Jan 21 17:04:32 crc kubenswrapper[4760]: I0121 17:04:32.119207 4760 scope.go:117] "RemoveContainer" containerID="1cd54442a1cec95c755451dfea5838f0d379993163f3ad7b883b77e18d6607ba" Jan 21 17:04:32 crc kubenswrapper[4760]: E0121 17:04:32.119517 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1cd54442a1cec95c755451dfea5838f0d379993163f3ad7b883b77e18d6607ba\": container with ID starting with 1cd54442a1cec95c755451dfea5838f0d379993163f3ad7b883b77e18d6607ba not found: ID does not exist" containerID="1cd54442a1cec95c755451dfea5838f0d379993163f3ad7b883b77e18d6607ba" Jan 21 17:04:32 crc kubenswrapper[4760]: I0121 17:04:32.119540 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1cd54442a1cec95c755451dfea5838f0d379993163f3ad7b883b77e18d6607ba"} err="failed to get container status \"1cd54442a1cec95c755451dfea5838f0d379993163f3ad7b883b77e18d6607ba\": rpc error: code = NotFound desc = could not find container \"1cd54442a1cec95c755451dfea5838f0d379993163f3ad7b883b77e18d6607ba\": container with ID starting with 1cd54442a1cec95c755451dfea5838f0d379993163f3ad7b883b77e18d6607ba not found: ID does not exist" Jan 21 17:04:32 crc kubenswrapper[4760]: I0121 17:04:32.119559 4760 scope.go:117] "RemoveContainer" containerID="2c5fc376b01305cb3f8a2760794b9fb295123166b6a1b29191c5a589861a1d48" Jan 21 17:04:32 crc kubenswrapper[4760]: E0121 
17:04:32.120116 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c5fc376b01305cb3f8a2760794b9fb295123166b6a1b29191c5a589861a1d48\": container with ID starting with 2c5fc376b01305cb3f8a2760794b9fb295123166b6a1b29191c5a589861a1d48 not found: ID does not exist" containerID="2c5fc376b01305cb3f8a2760794b9fb295123166b6a1b29191c5a589861a1d48" Jan 21 17:04:32 crc kubenswrapper[4760]: I0121 17:04:32.120142 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c5fc376b01305cb3f8a2760794b9fb295123166b6a1b29191c5a589861a1d48"} err="failed to get container status \"2c5fc376b01305cb3f8a2760794b9fb295123166b6a1b29191c5a589861a1d48\": rpc error: code = NotFound desc = could not find container \"2c5fc376b01305cb3f8a2760794b9fb295123166b6a1b29191c5a589861a1d48\": container with ID starting with 2c5fc376b01305cb3f8a2760794b9fb295123166b6a1b29191c5a589861a1d48 not found: ID does not exist" Jan 21 17:04:33 crc kubenswrapper[4760]: I0121 17:04:33.635656 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ac16167-67c0-4b92-9132-d163c09388a5" path="/var/lib/kubelet/pods/9ac16167-67c0-4b92-9132-d163c09388a5/volumes" Jan 21 17:04:37 crc kubenswrapper[4760]: I0121 17:04:37.623176 4760 scope.go:117] "RemoveContainer" containerID="487ee7a25796844d3241333d83e375f710897deb4616727fe5bac585a055bf45" Jan 21 17:04:37 crc kubenswrapper[4760]: E0121 17:04:37.623828 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 17:04:51 crc kubenswrapper[4760]: I0121 17:04:51.651597 
4760 scope.go:117] "RemoveContainer" containerID="487ee7a25796844d3241333d83e375f710897deb4616727fe5bac585a055bf45" Jan 21 17:04:51 crc kubenswrapper[4760]: E0121 17:04:51.652460 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 17:05:06 crc kubenswrapper[4760]: I0121 17:05:06.622577 4760 scope.go:117] "RemoveContainer" containerID="487ee7a25796844d3241333d83e375f710897deb4616727fe5bac585a055bf45" Jan 21 17:05:06 crc kubenswrapper[4760]: E0121 17:05:06.623311 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 17:05:21 crc kubenswrapper[4760]: I0121 17:05:21.623940 4760 scope.go:117] "RemoveContainer" containerID="487ee7a25796844d3241333d83e375f710897deb4616727fe5bac585a055bf45" Jan 21 17:05:21 crc kubenswrapper[4760]: E0121 17:05:21.624776 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 17:05:35 crc kubenswrapper[4760]: I0121 
17:05:35.622317 4760 scope.go:117] "RemoveContainer" containerID="487ee7a25796844d3241333d83e375f710897deb4616727fe5bac585a055bf45" Jan 21 17:05:35 crc kubenswrapper[4760]: E0121 17:05:35.623068 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 17:05:47 crc kubenswrapper[4760]: I0121 17:05:47.623965 4760 scope.go:117] "RemoveContainer" containerID="487ee7a25796844d3241333d83e375f710897deb4616727fe5bac585a055bf45" Jan 21 17:05:47 crc kubenswrapper[4760]: E0121 17:05:47.624890 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 17:06:02 crc kubenswrapper[4760]: I0121 17:06:02.622768 4760 scope.go:117] "RemoveContainer" containerID="487ee7a25796844d3241333d83e375f710897deb4616727fe5bac585a055bf45" Jan 21 17:06:02 crc kubenswrapper[4760]: E0121 17:06:02.623588 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 17:06:11 crc 
kubenswrapper[4760]: I0121 17:06:11.261842 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6fgdz"] Jan 21 17:06:11 crc kubenswrapper[4760]: E0121 17:06:11.262778 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ac16167-67c0-4b92-9132-d163c09388a5" containerName="extract-content" Jan 21 17:06:11 crc kubenswrapper[4760]: I0121 17:06:11.262802 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ac16167-67c0-4b92-9132-d163c09388a5" containerName="extract-content" Jan 21 17:06:11 crc kubenswrapper[4760]: E0121 17:06:11.262818 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ac16167-67c0-4b92-9132-d163c09388a5" containerName="extract-utilities" Jan 21 17:06:11 crc kubenswrapper[4760]: I0121 17:06:11.262825 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ac16167-67c0-4b92-9132-d163c09388a5" containerName="extract-utilities" Jan 21 17:06:11 crc kubenswrapper[4760]: E0121 17:06:11.262848 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ac16167-67c0-4b92-9132-d163c09388a5" containerName="registry-server" Jan 21 17:06:11 crc kubenswrapper[4760]: I0121 17:06:11.262854 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ac16167-67c0-4b92-9132-d163c09388a5" containerName="registry-server" Jan 21 17:06:11 crc kubenswrapper[4760]: I0121 17:06:11.263223 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ac16167-67c0-4b92-9132-d163c09388a5" containerName="registry-server" Jan 21 17:06:11 crc kubenswrapper[4760]: I0121 17:06:11.265202 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6fgdz" Jan 21 17:06:11 crc kubenswrapper[4760]: I0121 17:06:11.287313 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6fgdz"] Jan 21 17:06:11 crc kubenswrapper[4760]: I0121 17:06:11.361153 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ea00829-01aa-4875-a7c8-93efd9232980-utilities\") pod \"community-operators-6fgdz\" (UID: \"3ea00829-01aa-4875-a7c8-93efd9232980\") " pod="openshift-marketplace/community-operators-6fgdz" Jan 21 17:06:11 crc kubenswrapper[4760]: I0121 17:06:11.361214 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5fns\" (UniqueName: \"kubernetes.io/projected/3ea00829-01aa-4875-a7c8-93efd9232980-kube-api-access-g5fns\") pod \"community-operators-6fgdz\" (UID: \"3ea00829-01aa-4875-a7c8-93efd9232980\") " pod="openshift-marketplace/community-operators-6fgdz" Jan 21 17:06:11 crc kubenswrapper[4760]: I0121 17:06:11.361498 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ea00829-01aa-4875-a7c8-93efd9232980-catalog-content\") pod \"community-operators-6fgdz\" (UID: \"3ea00829-01aa-4875-a7c8-93efd9232980\") " pod="openshift-marketplace/community-operators-6fgdz" Jan 21 17:06:11 crc kubenswrapper[4760]: I0121 17:06:11.463657 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ea00829-01aa-4875-a7c8-93efd9232980-catalog-content\") pod \"community-operators-6fgdz\" (UID: \"3ea00829-01aa-4875-a7c8-93efd9232980\") " pod="openshift-marketplace/community-operators-6fgdz" Jan 21 17:06:11 crc kubenswrapper[4760]: I0121 17:06:11.463725 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ea00829-01aa-4875-a7c8-93efd9232980-utilities\") pod \"community-operators-6fgdz\" (UID: \"3ea00829-01aa-4875-a7c8-93efd9232980\") " pod="openshift-marketplace/community-operators-6fgdz" Jan 21 17:06:11 crc kubenswrapper[4760]: I0121 17:06:11.463744 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5fns\" (UniqueName: \"kubernetes.io/projected/3ea00829-01aa-4875-a7c8-93efd9232980-kube-api-access-g5fns\") pod \"community-operators-6fgdz\" (UID: \"3ea00829-01aa-4875-a7c8-93efd9232980\") " pod="openshift-marketplace/community-operators-6fgdz" Jan 21 17:06:11 crc kubenswrapper[4760]: I0121 17:06:11.464285 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ea00829-01aa-4875-a7c8-93efd9232980-utilities\") pod \"community-operators-6fgdz\" (UID: \"3ea00829-01aa-4875-a7c8-93efd9232980\") " pod="openshift-marketplace/community-operators-6fgdz" Jan 21 17:06:11 crc kubenswrapper[4760]: I0121 17:06:11.464286 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ea00829-01aa-4875-a7c8-93efd9232980-catalog-content\") pod \"community-operators-6fgdz\" (UID: \"3ea00829-01aa-4875-a7c8-93efd9232980\") " pod="openshift-marketplace/community-operators-6fgdz" Jan 21 17:06:11 crc kubenswrapper[4760]: I0121 17:06:11.500399 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5fns\" (UniqueName: \"kubernetes.io/projected/3ea00829-01aa-4875-a7c8-93efd9232980-kube-api-access-g5fns\") pod \"community-operators-6fgdz\" (UID: \"3ea00829-01aa-4875-a7c8-93efd9232980\") " pod="openshift-marketplace/community-operators-6fgdz" Jan 21 17:06:11 crc kubenswrapper[4760]: I0121 17:06:11.589626 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6fgdz" Jan 21 17:06:12 crc kubenswrapper[4760]: I0121 17:06:12.146132 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6fgdz"] Jan 21 17:06:12 crc kubenswrapper[4760]: I0121 17:06:12.959981 4760 generic.go:334] "Generic (PLEG): container finished" podID="3ea00829-01aa-4875-a7c8-93efd9232980" containerID="809593ce733c84b43a746c7307757668a4f1530542c98bd836ec16e030dbddd9" exitCode=0 Jan 21 17:06:12 crc kubenswrapper[4760]: I0121 17:06:12.960037 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6fgdz" event={"ID":"3ea00829-01aa-4875-a7c8-93efd9232980","Type":"ContainerDied","Data":"809593ce733c84b43a746c7307757668a4f1530542c98bd836ec16e030dbddd9"} Jan 21 17:06:12 crc kubenswrapper[4760]: I0121 17:06:12.960069 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6fgdz" event={"ID":"3ea00829-01aa-4875-a7c8-93efd9232980","Type":"ContainerStarted","Data":"4aece9ee363a0098ea3f38ad5eb35639c2e3466e353194d26463ad56ea1a52cb"} Jan 21 17:06:14 crc kubenswrapper[4760]: I0121 17:06:14.983380 4760 generic.go:334] "Generic (PLEG): container finished" podID="3ea00829-01aa-4875-a7c8-93efd9232980" containerID="9477e6737c9636263ace24578e4ba11b69c0f6d0184b30faeacb2c0c00be848f" exitCode=0 Jan 21 17:06:14 crc kubenswrapper[4760]: I0121 17:06:14.983924 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6fgdz" event={"ID":"3ea00829-01aa-4875-a7c8-93efd9232980","Type":"ContainerDied","Data":"9477e6737c9636263ace24578e4ba11b69c0f6d0184b30faeacb2c0c00be848f"} Jan 21 17:06:16 crc kubenswrapper[4760]: I0121 17:06:16.622362 4760 scope.go:117] "RemoveContainer" containerID="487ee7a25796844d3241333d83e375f710897deb4616727fe5bac585a055bf45" Jan 21 17:06:16 crc kubenswrapper[4760]: E0121 17:06:16.623115 4760 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" Jan 21 17:06:17 crc kubenswrapper[4760]: I0121 17:06:17.004256 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6fgdz" event={"ID":"3ea00829-01aa-4875-a7c8-93efd9232980","Type":"ContainerStarted","Data":"7dc95af719268932391f9f63409fb3fabad6ccdaa23f2cb05d23b448ac79b482"} Jan 21 17:06:17 crc kubenswrapper[4760]: I0121 17:06:17.036650 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6fgdz" podStartSLOduration=2.983954784 podStartE2EDuration="6.036626892s" podCreationTimestamp="2026-01-21 17:06:11 +0000 UTC" firstStartedPulling="2026-01-21 17:06:12.961459141 +0000 UTC m=+4743.629228719" lastFinishedPulling="2026-01-21 17:06:16.014131249 +0000 UTC m=+4746.681900827" observedRunningTime="2026-01-21 17:06:17.027187647 +0000 UTC m=+4747.694957225" watchObservedRunningTime="2026-01-21 17:06:17.036626892 +0000 UTC m=+4747.704396460" Jan 21 17:06:21 crc kubenswrapper[4760]: I0121 17:06:21.590106 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6fgdz" Jan 21 17:06:21 crc kubenswrapper[4760]: I0121 17:06:21.590902 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6fgdz" Jan 21 17:06:21 crc kubenswrapper[4760]: I0121 17:06:21.639875 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6fgdz" Jan 21 17:06:22 crc kubenswrapper[4760]: I0121 17:06:22.104682 
4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6fgdz" Jan 21 17:06:22 crc kubenswrapper[4760]: I0121 17:06:22.156482 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6fgdz"] Jan 21 17:06:24 crc kubenswrapper[4760]: I0121 17:06:24.067210 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6fgdz" podUID="3ea00829-01aa-4875-a7c8-93efd9232980" containerName="registry-server" containerID="cri-o://7dc95af719268932391f9f63409fb3fabad6ccdaa23f2cb05d23b448ac79b482" gracePeriod=2 Jan 21 17:06:24 crc kubenswrapper[4760]: I0121 17:06:24.519436 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6fgdz" Jan 21 17:06:24 crc kubenswrapper[4760]: I0121 17:06:24.626134 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ea00829-01aa-4875-a7c8-93efd9232980-utilities\") pod \"3ea00829-01aa-4875-a7c8-93efd9232980\" (UID: \"3ea00829-01aa-4875-a7c8-93efd9232980\") " Jan 21 17:06:24 crc kubenswrapper[4760]: I0121 17:06:24.626355 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5fns\" (UniqueName: \"kubernetes.io/projected/3ea00829-01aa-4875-a7c8-93efd9232980-kube-api-access-g5fns\") pod \"3ea00829-01aa-4875-a7c8-93efd9232980\" (UID: \"3ea00829-01aa-4875-a7c8-93efd9232980\") " Jan 21 17:06:24 crc kubenswrapper[4760]: I0121 17:06:24.626473 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ea00829-01aa-4875-a7c8-93efd9232980-catalog-content\") pod \"3ea00829-01aa-4875-a7c8-93efd9232980\" (UID: \"3ea00829-01aa-4875-a7c8-93efd9232980\") " Jan 21 17:06:24 crc kubenswrapper[4760]: I0121 
17:06:24.630752 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ea00829-01aa-4875-a7c8-93efd9232980-utilities" (OuterVolumeSpecName: "utilities") pod "3ea00829-01aa-4875-a7c8-93efd9232980" (UID: "3ea00829-01aa-4875-a7c8-93efd9232980"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:06:24 crc kubenswrapper[4760]: I0121 17:06:24.632559 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ea00829-01aa-4875-a7c8-93efd9232980-kube-api-access-g5fns" (OuterVolumeSpecName: "kube-api-access-g5fns") pod "3ea00829-01aa-4875-a7c8-93efd9232980" (UID: "3ea00829-01aa-4875-a7c8-93efd9232980"). InnerVolumeSpecName "kube-api-access-g5fns". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:06:24 crc kubenswrapper[4760]: I0121 17:06:24.729072 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ea00829-01aa-4875-a7c8-93efd9232980-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:06:24 crc kubenswrapper[4760]: I0121 17:06:24.729109 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5fns\" (UniqueName: \"kubernetes.io/projected/3ea00829-01aa-4875-a7c8-93efd9232980-kube-api-access-g5fns\") on node \"crc\" DevicePath \"\"" Jan 21 17:06:24 crc kubenswrapper[4760]: I0121 17:06:24.886045 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ea00829-01aa-4875-a7c8-93efd9232980-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3ea00829-01aa-4875-a7c8-93efd9232980" (UID: "3ea00829-01aa-4875-a7c8-93efd9232980"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:06:24 crc kubenswrapper[4760]: I0121 17:06:24.932248 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ea00829-01aa-4875-a7c8-93efd9232980-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:06:25 crc kubenswrapper[4760]: I0121 17:06:25.076511 4760 generic.go:334] "Generic (PLEG): container finished" podID="3ea00829-01aa-4875-a7c8-93efd9232980" containerID="7dc95af719268932391f9f63409fb3fabad6ccdaa23f2cb05d23b448ac79b482" exitCode=0 Jan 21 17:06:25 crc kubenswrapper[4760]: I0121 17:06:25.076576 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6fgdz" event={"ID":"3ea00829-01aa-4875-a7c8-93efd9232980","Type":"ContainerDied","Data":"7dc95af719268932391f9f63409fb3fabad6ccdaa23f2cb05d23b448ac79b482"} Jan 21 17:06:25 crc kubenswrapper[4760]: I0121 17:06:25.076601 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6fgdz" Jan 21 17:06:25 crc kubenswrapper[4760]: I0121 17:06:25.076644 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6fgdz" event={"ID":"3ea00829-01aa-4875-a7c8-93efd9232980","Type":"ContainerDied","Data":"4aece9ee363a0098ea3f38ad5eb35639c2e3466e353194d26463ad56ea1a52cb"} Jan 21 17:06:25 crc kubenswrapper[4760]: I0121 17:06:25.076668 4760 scope.go:117] "RemoveContainer" containerID="7dc95af719268932391f9f63409fb3fabad6ccdaa23f2cb05d23b448ac79b482" Jan 21 17:06:25 crc kubenswrapper[4760]: I0121 17:06:25.093630 4760 scope.go:117] "RemoveContainer" containerID="9477e6737c9636263ace24578e4ba11b69c0f6d0184b30faeacb2c0c00be848f" Jan 21 17:06:25 crc kubenswrapper[4760]: I0121 17:06:25.110783 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6fgdz"] Jan 21 17:06:25 crc kubenswrapper[4760]: I0121 17:06:25.120463 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6fgdz"] Jan 21 17:06:25 crc kubenswrapper[4760]: I0121 17:06:25.356049 4760 scope.go:117] "RemoveContainer" containerID="809593ce733c84b43a746c7307757668a4f1530542c98bd836ec16e030dbddd9" Jan 21 17:06:25 crc kubenswrapper[4760]: I0121 17:06:25.509360 4760 scope.go:117] "RemoveContainer" containerID="7dc95af719268932391f9f63409fb3fabad6ccdaa23f2cb05d23b448ac79b482" Jan 21 17:06:25 crc kubenswrapper[4760]: E0121 17:06:25.509873 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7dc95af719268932391f9f63409fb3fabad6ccdaa23f2cb05d23b448ac79b482\": container with ID starting with 7dc95af719268932391f9f63409fb3fabad6ccdaa23f2cb05d23b448ac79b482 not found: ID does not exist" containerID="7dc95af719268932391f9f63409fb3fabad6ccdaa23f2cb05d23b448ac79b482" Jan 21 17:06:25 crc kubenswrapper[4760]: I0121 17:06:25.509906 4760 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7dc95af719268932391f9f63409fb3fabad6ccdaa23f2cb05d23b448ac79b482"} err="failed to get container status \"7dc95af719268932391f9f63409fb3fabad6ccdaa23f2cb05d23b448ac79b482\": rpc error: code = NotFound desc = could not find container \"7dc95af719268932391f9f63409fb3fabad6ccdaa23f2cb05d23b448ac79b482\": container with ID starting with 7dc95af719268932391f9f63409fb3fabad6ccdaa23f2cb05d23b448ac79b482 not found: ID does not exist"
Jan 21 17:06:25 crc kubenswrapper[4760]: I0121 17:06:25.509931 4760 scope.go:117] "RemoveContainer" containerID="9477e6737c9636263ace24578e4ba11b69c0f6d0184b30faeacb2c0c00be848f"
Jan 21 17:06:25 crc kubenswrapper[4760]: E0121 17:06:25.510269 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9477e6737c9636263ace24578e4ba11b69c0f6d0184b30faeacb2c0c00be848f\": container with ID starting with 9477e6737c9636263ace24578e4ba11b69c0f6d0184b30faeacb2c0c00be848f not found: ID does not exist" containerID="9477e6737c9636263ace24578e4ba11b69c0f6d0184b30faeacb2c0c00be848f"
Jan 21 17:06:25 crc kubenswrapper[4760]: I0121 17:06:25.510318 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9477e6737c9636263ace24578e4ba11b69c0f6d0184b30faeacb2c0c00be848f"} err="failed to get container status \"9477e6737c9636263ace24578e4ba11b69c0f6d0184b30faeacb2c0c00be848f\": rpc error: code = NotFound desc = could not find container \"9477e6737c9636263ace24578e4ba11b69c0f6d0184b30faeacb2c0c00be848f\": container with ID starting with 9477e6737c9636263ace24578e4ba11b69c0f6d0184b30faeacb2c0c00be848f not found: ID does not exist"
Jan 21 17:06:25 crc kubenswrapper[4760]: I0121 17:06:25.510363 4760 scope.go:117] "RemoveContainer" containerID="809593ce733c84b43a746c7307757668a4f1530542c98bd836ec16e030dbddd9"
Jan 21 17:06:25 crc kubenswrapper[4760]: E0121 17:06:25.510707 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"809593ce733c84b43a746c7307757668a4f1530542c98bd836ec16e030dbddd9\": container with ID starting with 809593ce733c84b43a746c7307757668a4f1530542c98bd836ec16e030dbddd9 not found: ID does not exist" containerID="809593ce733c84b43a746c7307757668a4f1530542c98bd836ec16e030dbddd9"
Jan 21 17:06:25 crc kubenswrapper[4760]: I0121 17:06:25.510741 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"809593ce733c84b43a746c7307757668a4f1530542c98bd836ec16e030dbddd9"} err="failed to get container status \"809593ce733c84b43a746c7307757668a4f1530542c98bd836ec16e030dbddd9\": rpc error: code = NotFound desc = could not find container \"809593ce733c84b43a746c7307757668a4f1530542c98bd836ec16e030dbddd9\": container with ID starting with 809593ce733c84b43a746c7307757668a4f1530542c98bd836ec16e030dbddd9 not found: ID does not exist"
Jan 21 17:06:25 crc kubenswrapper[4760]: I0121 17:06:25.635779 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ea00829-01aa-4875-a7c8-93efd9232980" path="/var/lib/kubelet/pods/3ea00829-01aa-4875-a7c8-93efd9232980/volumes"
Jan 21 17:06:31 crc kubenswrapper[4760]: I0121 17:06:31.623163 4760 scope.go:117] "RemoveContainer" containerID="487ee7a25796844d3241333d83e375f710897deb4616727fe5bac585a055bf45"
Jan 21 17:06:31 crc kubenswrapper[4760]: E0121 17:06:31.623785 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 17:06:46 crc kubenswrapper[4760]: I0121 17:06:46.623242 4760 scope.go:117] "RemoveContainer" containerID="487ee7a25796844d3241333d83e375f710897deb4616727fe5bac585a055bf45"
Jan 21 17:06:46 crc kubenswrapper[4760]: E0121 17:06:46.624133 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 17:06:59 crc kubenswrapper[4760]: I0121 17:06:59.628618 4760 scope.go:117] "RemoveContainer" containerID="487ee7a25796844d3241333d83e375f710897deb4616727fe5bac585a055bf45"
Jan 21 17:06:59 crc kubenswrapper[4760]: E0121 17:06:59.629843 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 17:07:13 crc kubenswrapper[4760]: I0121 17:07:13.627376 4760 scope.go:117] "RemoveContainer" containerID="487ee7a25796844d3241333d83e375f710897deb4616727fe5bac585a055bf45"
Jan 21 17:07:13 crc kubenswrapper[4760]: E0121 17:07:13.628277 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 17:07:28 crc kubenswrapper[4760]: I0121 17:07:28.622776 4760 scope.go:117] "RemoveContainer" containerID="487ee7a25796844d3241333d83e375f710897deb4616727fe5bac585a055bf45"
Jan 21 17:07:28 crc kubenswrapper[4760]: E0121 17:07:28.624870 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 17:07:40 crc kubenswrapper[4760]: I0121 17:07:40.623340 4760 scope.go:117] "RemoveContainer" containerID="487ee7a25796844d3241333d83e375f710897deb4616727fe5bac585a055bf45"
Jan 21 17:07:40 crc kubenswrapper[4760]: E0121 17:07:40.624212 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 17:07:53 crc kubenswrapper[4760]: I0121 17:07:53.627298 4760 scope.go:117] "RemoveContainer" containerID="487ee7a25796844d3241333d83e375f710897deb4616727fe5bac585a055bf45"
Jan 21 17:07:53 crc kubenswrapper[4760]: E0121 17:07:53.628778 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 17:08:04 crc kubenswrapper[4760]: I0121 17:08:04.623431 4760 scope.go:117] "RemoveContainer" containerID="487ee7a25796844d3241333d83e375f710897deb4616727fe5bac585a055bf45"
Jan 21 17:08:04 crc kubenswrapper[4760]: E0121 17:08:04.624163 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 17:08:08 crc kubenswrapper[4760]: I0121 17:08:08.614109 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7zwbj"]
Jan 21 17:08:08 crc kubenswrapper[4760]: E0121 17:08:08.615128 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ea00829-01aa-4875-a7c8-93efd9232980" containerName="extract-content"
Jan 21 17:08:08 crc kubenswrapper[4760]: I0121 17:08:08.615142 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ea00829-01aa-4875-a7c8-93efd9232980" containerName="extract-content"
Jan 21 17:08:08 crc kubenswrapper[4760]: E0121 17:08:08.615161 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ea00829-01aa-4875-a7c8-93efd9232980" containerName="extract-utilities"
Jan 21 17:08:08 crc kubenswrapper[4760]: I0121 17:08:08.615169 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ea00829-01aa-4875-a7c8-93efd9232980" containerName="extract-utilities"
Jan 21 17:08:08 crc kubenswrapper[4760]: E0121 17:08:08.615186 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ea00829-01aa-4875-a7c8-93efd9232980" containerName="registry-server"
Jan 21 17:08:08 crc kubenswrapper[4760]: I0121 17:08:08.615192 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ea00829-01aa-4875-a7c8-93efd9232980" containerName="registry-server"
Jan 21 17:08:08 crc kubenswrapper[4760]: I0121 17:08:08.615422 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ea00829-01aa-4875-a7c8-93efd9232980" containerName="registry-server"
Jan 21 17:08:08 crc kubenswrapper[4760]: I0121 17:08:08.616905 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7zwbj"
Jan 21 17:08:08 crc kubenswrapper[4760]: I0121 17:08:08.627820 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7zwbj"]
Jan 21 17:08:08 crc kubenswrapper[4760]: I0121 17:08:08.701413 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgb28\" (UniqueName: \"kubernetes.io/projected/e016a18d-5ea2-4cdd-8b6c-b97258d99902-kube-api-access-sgb28\") pod \"redhat-operators-7zwbj\" (UID: \"e016a18d-5ea2-4cdd-8b6c-b97258d99902\") " pod="openshift-marketplace/redhat-operators-7zwbj"
Jan 21 17:08:08 crc kubenswrapper[4760]: I0121 17:08:08.701641 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e016a18d-5ea2-4cdd-8b6c-b97258d99902-catalog-content\") pod \"redhat-operators-7zwbj\" (UID: \"e016a18d-5ea2-4cdd-8b6c-b97258d99902\") " pod="openshift-marketplace/redhat-operators-7zwbj"
Jan 21 17:08:08 crc kubenswrapper[4760]: I0121 17:08:08.701759 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e016a18d-5ea2-4cdd-8b6c-b97258d99902-utilities\") pod \"redhat-operators-7zwbj\" (UID: \"e016a18d-5ea2-4cdd-8b6c-b97258d99902\") " pod="openshift-marketplace/redhat-operators-7zwbj"
Jan 21 17:08:08 crc kubenswrapper[4760]: I0121 17:08:08.803836 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e016a18d-5ea2-4cdd-8b6c-b97258d99902-utilities\") pod \"redhat-operators-7zwbj\" (UID: \"e016a18d-5ea2-4cdd-8b6c-b97258d99902\") " pod="openshift-marketplace/redhat-operators-7zwbj"
Jan 21 17:08:08 crc kubenswrapper[4760]: I0121 17:08:08.804596 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e016a18d-5ea2-4cdd-8b6c-b97258d99902-utilities\") pod \"redhat-operators-7zwbj\" (UID: \"e016a18d-5ea2-4cdd-8b6c-b97258d99902\") " pod="openshift-marketplace/redhat-operators-7zwbj"
Jan 21 17:08:08 crc kubenswrapper[4760]: I0121 17:08:08.805126 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgb28\" (UniqueName: \"kubernetes.io/projected/e016a18d-5ea2-4cdd-8b6c-b97258d99902-kube-api-access-sgb28\") pod \"redhat-operators-7zwbj\" (UID: \"e016a18d-5ea2-4cdd-8b6c-b97258d99902\") " pod="openshift-marketplace/redhat-operators-7zwbj"
Jan 21 17:08:08 crc kubenswrapper[4760]: I0121 17:08:08.805228 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e016a18d-5ea2-4cdd-8b6c-b97258d99902-catalog-content\") pod \"redhat-operators-7zwbj\" (UID: \"e016a18d-5ea2-4cdd-8b6c-b97258d99902\") " pod="openshift-marketplace/redhat-operators-7zwbj"
Jan 21 17:08:08 crc kubenswrapper[4760]: I0121 17:08:08.805720 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e016a18d-5ea2-4cdd-8b6c-b97258d99902-catalog-content\") pod \"redhat-operators-7zwbj\" (UID: \"e016a18d-5ea2-4cdd-8b6c-b97258d99902\") " pod="openshift-marketplace/redhat-operators-7zwbj"
Jan 21 17:08:08 crc kubenswrapper[4760]: I0121 17:08:08.823771 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgb28\" (UniqueName: \"kubernetes.io/projected/e016a18d-5ea2-4cdd-8b6c-b97258d99902-kube-api-access-sgb28\") pod \"redhat-operators-7zwbj\" (UID: \"e016a18d-5ea2-4cdd-8b6c-b97258d99902\") " pod="openshift-marketplace/redhat-operators-7zwbj"
Jan 21 17:08:08 crc kubenswrapper[4760]: I0121 17:08:08.955124 4760 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7zwbj"
Jan 21 17:08:09 crc kubenswrapper[4760]: I0121 17:08:09.470669 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7zwbj"]
Jan 21 17:08:10 crc kubenswrapper[4760]: I0121 17:08:10.007284 4760 generic.go:334] "Generic (PLEG): container finished" podID="e016a18d-5ea2-4cdd-8b6c-b97258d99902" containerID="c10c024e640d0a948dae0f913863cd8856d64a915cbd9fb7f3c5fd2d512d890c" exitCode=0
Jan 21 17:08:10 crc kubenswrapper[4760]: I0121 17:08:10.007361 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7zwbj" event={"ID":"e016a18d-5ea2-4cdd-8b6c-b97258d99902","Type":"ContainerDied","Data":"c10c024e640d0a948dae0f913863cd8856d64a915cbd9fb7f3c5fd2d512d890c"}
Jan 21 17:08:10 crc kubenswrapper[4760]: I0121 17:08:10.007711 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7zwbj" event={"ID":"e016a18d-5ea2-4cdd-8b6c-b97258d99902","Type":"ContainerStarted","Data":"5adf5fc41b82ce32ced4fed97eeab3c4292ce1ef4540fe625f97861d4a97031f"}
Jan 21 17:08:12 crc kubenswrapper[4760]: I0121 17:08:12.026208 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7zwbj" event={"ID":"e016a18d-5ea2-4cdd-8b6c-b97258d99902","Type":"ContainerStarted","Data":"e2cd870e900d0e24b65ad1cca4e5924c4804cb9aff4e945150ca4b331b696bcd"}
Jan 21 17:08:13 crc kubenswrapper[4760]: I0121 17:08:13.039584 4760 generic.go:334] "Generic (PLEG): container finished" podID="e016a18d-5ea2-4cdd-8b6c-b97258d99902" containerID="e2cd870e900d0e24b65ad1cca4e5924c4804cb9aff4e945150ca4b331b696bcd" exitCode=0
Jan 21 17:08:13 crc kubenswrapper[4760]: I0121 17:08:13.039637 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7zwbj" event={"ID":"e016a18d-5ea2-4cdd-8b6c-b97258d99902","Type":"ContainerDied","Data":"e2cd870e900d0e24b65ad1cca4e5924c4804cb9aff4e945150ca4b331b696bcd"}
Jan 21 17:08:15 crc kubenswrapper[4760]: I0121 17:08:15.060037 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7zwbj" event={"ID":"e016a18d-5ea2-4cdd-8b6c-b97258d99902","Type":"ContainerStarted","Data":"01b5894cd722edde65859fad956d524b65baa026a691610867c5cd67f38dce0a"}
Jan 21 17:08:15 crc kubenswrapper[4760]: I0121 17:08:15.089097 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7zwbj" podStartSLOduration=3.666522064 podStartE2EDuration="7.08907816s" podCreationTimestamp="2026-01-21 17:08:08 +0000 UTC" firstStartedPulling="2026-01-21 17:08:10.009188099 +0000 UTC m=+4860.676957667" lastFinishedPulling="2026-01-21 17:08:13.431744175 +0000 UTC m=+4864.099513763" observedRunningTime="2026-01-21 17:08:15.084215859 +0000 UTC m=+4865.751985447" watchObservedRunningTime="2026-01-21 17:08:15.08907816 +0000 UTC m=+4865.756847738"
Jan 21 17:08:18 crc kubenswrapper[4760]: I0121 17:08:18.622175 4760 scope.go:117] "RemoveContainer" containerID="487ee7a25796844d3241333d83e375f710897deb4616727fe5bac585a055bf45"
Jan 21 17:08:18 crc kubenswrapper[4760]: E0121 17:08:18.623429 4760 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lp9r_openshift-machine-config-operator(5dd365e7-570c-4130-a299-30e376624ce2)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2"
Jan 21 17:08:18 crc kubenswrapper[4760]: I0121 17:08:18.956219 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7zwbj"
Jan 21 17:08:18 crc kubenswrapper[4760]: I0121 17:08:18.956911 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7zwbj"
Jan 21 17:08:20 crc kubenswrapper[4760]: I0121 17:08:20.012058 4760 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7zwbj" podUID="e016a18d-5ea2-4cdd-8b6c-b97258d99902" containerName="registry-server" probeResult="failure" output=<
Jan 21 17:08:20 crc kubenswrapper[4760]: timeout: failed to connect service ":50051" within 1s
Jan 21 17:08:20 crc kubenswrapper[4760]: >
Jan 21 17:08:29 crc kubenswrapper[4760]: I0121 17:08:29.030734 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7zwbj"
Jan 21 17:08:29 crc kubenswrapper[4760]: I0121 17:08:29.118653 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7zwbj"
Jan 21 17:08:29 crc kubenswrapper[4760]: I0121 17:08:29.282291 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7zwbj"]
Jan 21 17:08:30 crc kubenswrapper[4760]: I0121 17:08:30.231665 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7zwbj" podUID="e016a18d-5ea2-4cdd-8b6c-b97258d99902" containerName="registry-server" containerID="cri-o://01b5894cd722edde65859fad956d524b65baa026a691610867c5cd67f38dce0a" gracePeriod=2
Jan 21 17:08:30 crc kubenswrapper[4760]: I0121 17:08:30.624705 4760 scope.go:117] "RemoveContainer" containerID="487ee7a25796844d3241333d83e375f710897deb4616727fe5bac585a055bf45"
Jan 21 17:08:30 crc kubenswrapper[4760]: I0121 17:08:30.862082 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7zwbj"
Jan 21 17:08:31 crc kubenswrapper[4760]: I0121 17:08:31.000096 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e016a18d-5ea2-4cdd-8b6c-b97258d99902-utilities\") pod \"e016a18d-5ea2-4cdd-8b6c-b97258d99902\" (UID: \"e016a18d-5ea2-4cdd-8b6c-b97258d99902\") "
Jan 21 17:08:31 crc kubenswrapper[4760]: I0121 17:08:31.000494 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgb28\" (UniqueName: \"kubernetes.io/projected/e016a18d-5ea2-4cdd-8b6c-b97258d99902-kube-api-access-sgb28\") pod \"e016a18d-5ea2-4cdd-8b6c-b97258d99902\" (UID: \"e016a18d-5ea2-4cdd-8b6c-b97258d99902\") "
Jan 21 17:08:31 crc kubenswrapper[4760]: I0121 17:08:31.000808 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e016a18d-5ea2-4cdd-8b6c-b97258d99902-catalog-content\") pod \"e016a18d-5ea2-4cdd-8b6c-b97258d99902\" (UID: \"e016a18d-5ea2-4cdd-8b6c-b97258d99902\") "
Jan 21 17:08:31 crc kubenswrapper[4760]: I0121 17:08:31.000943 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e016a18d-5ea2-4cdd-8b6c-b97258d99902-utilities" (OuterVolumeSpecName: "utilities") pod "e016a18d-5ea2-4cdd-8b6c-b97258d99902" (UID: "e016a18d-5ea2-4cdd-8b6c-b97258d99902"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 17:08:31 crc kubenswrapper[4760]: I0121 17:08:31.001519 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e016a18d-5ea2-4cdd-8b6c-b97258d99902-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 17:08:31 crc kubenswrapper[4760]: I0121 17:08:31.021866 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e016a18d-5ea2-4cdd-8b6c-b97258d99902-kube-api-access-sgb28" (OuterVolumeSpecName: "kube-api-access-sgb28") pod "e016a18d-5ea2-4cdd-8b6c-b97258d99902" (UID: "e016a18d-5ea2-4cdd-8b6c-b97258d99902"). InnerVolumeSpecName "kube-api-access-sgb28". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:08:31 crc kubenswrapper[4760]: I0121 17:08:31.103476 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sgb28\" (UniqueName: \"kubernetes.io/projected/e016a18d-5ea2-4cdd-8b6c-b97258d99902-kube-api-access-sgb28\") on node \"crc\" DevicePath \"\""
Jan 21 17:08:31 crc kubenswrapper[4760]: I0121 17:08:31.152108 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e016a18d-5ea2-4cdd-8b6c-b97258d99902-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e016a18d-5ea2-4cdd-8b6c-b97258d99902" (UID: "e016a18d-5ea2-4cdd-8b6c-b97258d99902"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 17:08:31 crc kubenswrapper[4760]: I0121 17:08:31.205556 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e016a18d-5ea2-4cdd-8b6c-b97258d99902-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 17:08:31 crc kubenswrapper[4760]: I0121 17:08:31.249306 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerStarted","Data":"81cf80092b7f63438cedfb66d8dfa60908df256d2711669353e781999c840011"}
Jan 21 17:08:31 crc kubenswrapper[4760]: I0121 17:08:31.255785 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7zwbj"
Jan 21 17:08:31 crc kubenswrapper[4760]: I0121 17:08:31.255802 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7zwbj" event={"ID":"e016a18d-5ea2-4cdd-8b6c-b97258d99902","Type":"ContainerDied","Data":"01b5894cd722edde65859fad956d524b65baa026a691610867c5cd67f38dce0a"}
Jan 21 17:08:31 crc kubenswrapper[4760]: I0121 17:08:31.256200 4760 scope.go:117] "RemoveContainer" containerID="01b5894cd722edde65859fad956d524b65baa026a691610867c5cd67f38dce0a"
Jan 21 17:08:31 crc kubenswrapper[4760]: I0121 17:08:31.255631 4760 generic.go:334] "Generic (PLEG): container finished" podID="e016a18d-5ea2-4cdd-8b6c-b97258d99902" containerID="01b5894cd722edde65859fad956d524b65baa026a691610867c5cd67f38dce0a" exitCode=0
Jan 21 17:08:31 crc kubenswrapper[4760]: I0121 17:08:31.257896 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7zwbj" event={"ID":"e016a18d-5ea2-4cdd-8b6c-b97258d99902","Type":"ContainerDied","Data":"5adf5fc41b82ce32ced4fed97eeab3c4292ce1ef4540fe625f97861d4a97031f"}
Jan 21 17:08:31 crc kubenswrapper[4760]: I0121 17:08:31.303294 4760 scope.go:117] "RemoveContainer" containerID="e2cd870e900d0e24b65ad1cca4e5924c4804cb9aff4e945150ca4b331b696bcd"
Jan 21 17:08:31 crc kubenswrapper[4760]: I0121 17:08:31.348281 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7zwbj"]
Jan 21 17:08:31 crc kubenswrapper[4760]: I0121 17:08:31.360783 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7zwbj"]
Jan 21 17:08:31 crc kubenswrapper[4760]: I0121 17:08:31.364428 4760 scope.go:117] "RemoveContainer" containerID="c10c024e640d0a948dae0f913863cd8856d64a915cbd9fb7f3c5fd2d512d890c"
Jan 21 17:08:31 crc kubenswrapper[4760]: I0121 17:08:31.419335 4760 scope.go:117] "RemoveContainer" containerID="01b5894cd722edde65859fad956d524b65baa026a691610867c5cd67f38dce0a"
Jan 21 17:08:31 crc kubenswrapper[4760]: E0121 17:08:31.420224 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01b5894cd722edde65859fad956d524b65baa026a691610867c5cd67f38dce0a\": container with ID starting with 01b5894cd722edde65859fad956d524b65baa026a691610867c5cd67f38dce0a not found: ID does not exist" containerID="01b5894cd722edde65859fad956d524b65baa026a691610867c5cd67f38dce0a"
Jan 21 17:08:31 crc kubenswrapper[4760]: I0121 17:08:31.420340 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01b5894cd722edde65859fad956d524b65baa026a691610867c5cd67f38dce0a"} err="failed to get container status \"01b5894cd722edde65859fad956d524b65baa026a691610867c5cd67f38dce0a\": rpc error: code = NotFound desc = could not find container \"01b5894cd722edde65859fad956d524b65baa026a691610867c5cd67f38dce0a\": container with ID starting with 01b5894cd722edde65859fad956d524b65baa026a691610867c5cd67f38dce0a not found: ID does not exist"
Jan 21 17:08:31 crc kubenswrapper[4760]: I0121 17:08:31.420395 4760 scope.go:117] "RemoveContainer" containerID="e2cd870e900d0e24b65ad1cca4e5924c4804cb9aff4e945150ca4b331b696bcd"
Jan 21 17:08:31 crc kubenswrapper[4760]: E0121 17:08:31.420743 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2cd870e900d0e24b65ad1cca4e5924c4804cb9aff4e945150ca4b331b696bcd\": container with ID starting with e2cd870e900d0e24b65ad1cca4e5924c4804cb9aff4e945150ca4b331b696bcd not found: ID does not exist" containerID="e2cd870e900d0e24b65ad1cca4e5924c4804cb9aff4e945150ca4b331b696bcd"
Jan 21 17:08:31 crc kubenswrapper[4760]: I0121 17:08:31.420774 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2cd870e900d0e24b65ad1cca4e5924c4804cb9aff4e945150ca4b331b696bcd"} err="failed to get container status \"e2cd870e900d0e24b65ad1cca4e5924c4804cb9aff4e945150ca4b331b696bcd\": rpc error: code = NotFound desc = could not find container \"e2cd870e900d0e24b65ad1cca4e5924c4804cb9aff4e945150ca4b331b696bcd\": container with ID starting with e2cd870e900d0e24b65ad1cca4e5924c4804cb9aff4e945150ca4b331b696bcd not found: ID does not exist"
Jan 21 17:08:31 crc kubenswrapper[4760]: I0121 17:08:31.420792 4760 scope.go:117] "RemoveContainer" containerID="c10c024e640d0a948dae0f913863cd8856d64a915cbd9fb7f3c5fd2d512d890c"
Jan 21 17:08:31 crc kubenswrapper[4760]: E0121 17:08:31.421114 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c10c024e640d0a948dae0f913863cd8856d64a915cbd9fb7f3c5fd2d512d890c\": container with ID starting with c10c024e640d0a948dae0f913863cd8856d64a915cbd9fb7f3c5fd2d512d890c not found: ID does not exist" containerID="c10c024e640d0a948dae0f913863cd8856d64a915cbd9fb7f3c5fd2d512d890c"
Jan 21 17:08:31 crc kubenswrapper[4760]: I0121 17:08:31.421141 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c10c024e640d0a948dae0f913863cd8856d64a915cbd9fb7f3c5fd2d512d890c"} err="failed to get container status \"c10c024e640d0a948dae0f913863cd8856d64a915cbd9fb7f3c5fd2d512d890c\": rpc error: code = NotFound desc = could not find container \"c10c024e640d0a948dae0f913863cd8856d64a915cbd9fb7f3c5fd2d512d890c\": container with ID starting with c10c024e640d0a948dae0f913863cd8856d64a915cbd9fb7f3c5fd2d512d890c not found: ID does not exist"
Jan 21 17:08:31 crc kubenswrapper[4760]: I0121 17:08:31.634501 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e016a18d-5ea2-4cdd-8b6c-b97258d99902" path="/var/lib/kubelet/pods/e016a18d-5ea2-4cdd-8b6c-b97258d99902/volumes"
Jan 21 17:09:24 crc kubenswrapper[4760]: I0121 17:09:24.729345 4760 generic.go:334] "Generic (PLEG): container finished" podID="bcdeb98a-d5e9-441e-914e-7b995f026bd4" containerID="e9874a212c8ae576e4e4159ddfbdd3e6e10465e29cbe88a24e3718402612a282" exitCode=0
Jan 21 17:09:24 crc kubenswrapper[4760]: I0121 17:09:24.729436 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-n76vm/must-gather-hpmlt" event={"ID":"bcdeb98a-d5e9-441e-914e-7b995f026bd4","Type":"ContainerDied","Data":"e9874a212c8ae576e4e4159ddfbdd3e6e10465e29cbe88a24e3718402612a282"}
Jan 21 17:09:24 crc kubenswrapper[4760]: I0121 17:09:24.730587 4760 scope.go:117] "RemoveContainer" containerID="e9874a212c8ae576e4e4159ddfbdd3e6e10465e29cbe88a24e3718402612a282"
Jan 21 17:09:24 crc kubenswrapper[4760]: I0121 17:09:24.783278 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-n76vm_must-gather-hpmlt_bcdeb98a-d5e9-441e-914e-7b995f026bd4/gather/0.log"
Jan 21 17:09:34 crc kubenswrapper[4760]: I0121 17:09:34.158116 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-n76vm/must-gather-hpmlt"]
Jan 21 17:09:34 crc kubenswrapper[4760]: I0121 17:09:34.158988 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-n76vm/must-gather-hpmlt" podUID="bcdeb98a-d5e9-441e-914e-7b995f026bd4" containerName="copy" containerID="cri-o://fa9c83421ef5e7c1de28842bee8465dc922b2c02a1bbb9a063a500c5e99e63c2" gracePeriod=2
Jan 21 17:09:34 crc kubenswrapper[4760]: I0121 17:09:34.167979 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-n76vm/must-gather-hpmlt"]
Jan 21 17:09:34 crc kubenswrapper[4760]: I0121 17:09:34.625777 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-n76vm_must-gather-hpmlt_bcdeb98a-d5e9-441e-914e-7b995f026bd4/copy/0.log"
Jan 21 17:09:34 crc kubenswrapper[4760]: I0121 17:09:34.626877 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-n76vm/must-gather-hpmlt"
Jan 21 17:09:34 crc kubenswrapper[4760]: I0121 17:09:34.715439 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pdlt\" (UniqueName: \"kubernetes.io/projected/bcdeb98a-d5e9-441e-914e-7b995f026bd4-kube-api-access-2pdlt\") pod \"bcdeb98a-d5e9-441e-914e-7b995f026bd4\" (UID: \"bcdeb98a-d5e9-441e-914e-7b995f026bd4\") "
Jan 21 17:09:34 crc kubenswrapper[4760]: I0121 17:09:34.715691 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bcdeb98a-d5e9-441e-914e-7b995f026bd4-must-gather-output\") pod \"bcdeb98a-d5e9-441e-914e-7b995f026bd4\" (UID: \"bcdeb98a-d5e9-441e-914e-7b995f026bd4\") "
Jan 21 17:09:34 crc kubenswrapper[4760]: I0121 17:09:34.751211 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcdeb98a-d5e9-441e-914e-7b995f026bd4-kube-api-access-2pdlt" (OuterVolumeSpecName: "kube-api-access-2pdlt") pod "bcdeb98a-d5e9-441e-914e-7b995f026bd4" (UID: "bcdeb98a-d5e9-441e-914e-7b995f026bd4"). InnerVolumeSpecName "kube-api-access-2pdlt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 17:09:34 crc kubenswrapper[4760]: I0121 17:09:34.817957 4760 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-n76vm_must-gather-hpmlt_bcdeb98a-d5e9-441e-914e-7b995f026bd4/copy/0.log"
Jan 21 17:09:34 crc kubenswrapper[4760]: I0121 17:09:34.818293 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pdlt\" (UniqueName: \"kubernetes.io/projected/bcdeb98a-d5e9-441e-914e-7b995f026bd4-kube-api-access-2pdlt\") on node \"crc\" DevicePath \"\""
Jan 21 17:09:34 crc kubenswrapper[4760]: I0121 17:09:34.818464 4760 generic.go:334] "Generic (PLEG): container finished" podID="bcdeb98a-d5e9-441e-914e-7b995f026bd4" containerID="fa9c83421ef5e7c1de28842bee8465dc922b2c02a1bbb9a063a500c5e99e63c2" exitCode=143
Jan 21 17:09:34 crc kubenswrapper[4760]: I0121 17:09:34.818514 4760 scope.go:117] "RemoveContainer" containerID="fa9c83421ef5e7c1de28842bee8465dc922b2c02a1bbb9a063a500c5e99e63c2"
Jan 21 17:09:34 crc kubenswrapper[4760]: I0121 17:09:34.818636 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-n76vm/must-gather-hpmlt"
Jan 21 17:09:34 crc kubenswrapper[4760]: I0121 17:09:34.850646 4760 scope.go:117] "RemoveContainer" containerID="e9874a212c8ae576e4e4159ddfbdd3e6e10465e29cbe88a24e3718402612a282"
Jan 21 17:09:34 crc kubenswrapper[4760]: I0121 17:09:34.920657 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bcdeb98a-d5e9-441e-914e-7b995f026bd4-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "bcdeb98a-d5e9-441e-914e-7b995f026bd4" (UID: "bcdeb98a-d5e9-441e-914e-7b995f026bd4"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 17:09:34 crc kubenswrapper[4760]: I0121 17:09:34.921192 4760 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bcdeb98a-d5e9-441e-914e-7b995f026bd4-must-gather-output\") on node \"crc\" DevicePath \"\""
Jan 21 17:09:34 crc kubenswrapper[4760]: I0121 17:09:34.958466 4760 scope.go:117] "RemoveContainer" containerID="fa9c83421ef5e7c1de28842bee8465dc922b2c02a1bbb9a063a500c5e99e63c2"
Jan 21 17:09:34 crc kubenswrapper[4760]: E0121 17:09:34.958881 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa9c83421ef5e7c1de28842bee8465dc922b2c02a1bbb9a063a500c5e99e63c2\": container with ID starting with fa9c83421ef5e7c1de28842bee8465dc922b2c02a1bbb9a063a500c5e99e63c2 not found: ID does not exist" containerID="fa9c83421ef5e7c1de28842bee8465dc922b2c02a1bbb9a063a500c5e99e63c2"
Jan 21 17:09:34 crc kubenswrapper[4760]: I0121 17:09:34.958947 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa9c83421ef5e7c1de28842bee8465dc922b2c02a1bbb9a063a500c5e99e63c2"} err="failed to get container status \"fa9c83421ef5e7c1de28842bee8465dc922b2c02a1bbb9a063a500c5e99e63c2\": rpc error: code = NotFound desc = could not find container \"fa9c83421ef5e7c1de28842bee8465dc922b2c02a1bbb9a063a500c5e99e63c2\": container with ID starting with fa9c83421ef5e7c1de28842bee8465dc922b2c02a1bbb9a063a500c5e99e63c2 not found: ID does not exist"
Jan 21 17:09:34 crc kubenswrapper[4760]: I0121 17:09:34.958992 4760 scope.go:117] "RemoveContainer" containerID="e9874a212c8ae576e4e4159ddfbdd3e6e10465e29cbe88a24e3718402612a282"
Jan 21 17:09:34 crc kubenswrapper[4760]: E0121 17:09:34.959317 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9874a212c8ae576e4e4159ddfbdd3e6e10465e29cbe88a24e3718402612a282\": container with ID starting with e9874a212c8ae576e4e4159ddfbdd3e6e10465e29cbe88a24e3718402612a282 not found: ID does not exist" containerID="e9874a212c8ae576e4e4159ddfbdd3e6e10465e29cbe88a24e3718402612a282"
Jan 21 17:09:34 crc kubenswrapper[4760]: I0121 17:09:34.959367 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9874a212c8ae576e4e4159ddfbdd3e6e10465e29cbe88a24e3718402612a282"} err="failed to get container status \"e9874a212c8ae576e4e4159ddfbdd3e6e10465e29cbe88a24e3718402612a282\": rpc error: code = NotFound desc = could not find container \"e9874a212c8ae576e4e4159ddfbdd3e6e10465e29cbe88a24e3718402612a282\": container with ID starting with e9874a212c8ae576e4e4159ddfbdd3e6e10465e29cbe88a24e3718402612a282 not found: ID does not exist"
Jan 21 17:09:35 crc kubenswrapper[4760]: I0121 17:09:35.634388 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcdeb98a-d5e9-441e-914e-7b995f026bd4" path="/var/lib/kubelet/pods/bcdeb98a-d5e9-441e-914e-7b995f026bd4/volumes"
Jan 21 17:10:22 crc kubenswrapper[4760]: I0121 17:10:22.272575 4760 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6r5nr"]
Jan 21 17:10:22 crc kubenswrapper[4760]: E0121 17:10:22.273502 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e016a18d-5ea2-4cdd-8b6c-b97258d99902" containerName="extract-utilities"
Jan 21 17:10:22 crc kubenswrapper[4760]: I0121 17:10:22.273515 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="e016a18d-5ea2-4cdd-8b6c-b97258d99902" containerName="extract-utilities"
Jan 21 17:10:22 crc kubenswrapper[4760]: E0121 17:10:22.273525 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e016a18d-5ea2-4cdd-8b6c-b97258d99902" containerName="extract-content"
Jan 21 17:10:22 crc kubenswrapper[4760]: I0121 17:10:22.273531 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="e016a18d-5ea2-4cdd-8b6c-b97258d99902" containerName="extract-content"
Jan 21 17:10:22 crc kubenswrapper[4760]: E0121 17:10:22.273548 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e016a18d-5ea2-4cdd-8b6c-b97258d99902" containerName="registry-server"
Jan 21 17:10:22 crc kubenswrapper[4760]: I0121 17:10:22.273554 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="e016a18d-5ea2-4cdd-8b6c-b97258d99902" containerName="registry-server"
Jan 21 17:10:22 crc kubenswrapper[4760]: E0121 17:10:22.273571 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcdeb98a-d5e9-441e-914e-7b995f026bd4" containerName="gather"
Jan 21 17:10:22 crc kubenswrapper[4760]: I0121 17:10:22.273578 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcdeb98a-d5e9-441e-914e-7b995f026bd4" containerName="gather"
Jan 21 17:10:22 crc kubenswrapper[4760]: E0121 17:10:22.273598 4760 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcdeb98a-d5e9-441e-914e-7b995f026bd4" containerName="copy"
Jan 21 17:10:22 crc kubenswrapper[4760]: I0121 17:10:22.273604 4760 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcdeb98a-d5e9-441e-914e-7b995f026bd4" containerName="copy"
Jan 21 17:10:22 crc kubenswrapper[4760]: I0121 17:10:22.273775 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcdeb98a-d5e9-441e-914e-7b995f026bd4" containerName="copy"
Jan 21 17:10:22 crc kubenswrapper[4760]: I0121 17:10:22.273789 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="e016a18d-5ea2-4cdd-8b6c-b97258d99902" containerName="registry-server"
Jan 21 17:10:22 crc kubenswrapper[4760]: I0121 17:10:22.273802 4760 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcdeb98a-d5e9-441e-914e-7b995f026bd4" containerName="gather"
Jan 21 17:10:22 crc kubenswrapper[4760]: I0121 17:10:22.275233 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6r5nr" Jan 21 17:10:22 crc kubenswrapper[4760]: I0121 17:10:22.282398 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6r5nr"] Jan 21 17:10:22 crc kubenswrapper[4760]: I0121 17:10:22.369130 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwhxz\" (UniqueName: \"kubernetes.io/projected/80218623-087f-4287-b59a-93feb3f02013-kube-api-access-hwhxz\") pod \"redhat-marketplace-6r5nr\" (UID: \"80218623-087f-4287-b59a-93feb3f02013\") " pod="openshift-marketplace/redhat-marketplace-6r5nr" Jan 21 17:10:22 crc kubenswrapper[4760]: I0121 17:10:22.369186 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80218623-087f-4287-b59a-93feb3f02013-catalog-content\") pod \"redhat-marketplace-6r5nr\" (UID: \"80218623-087f-4287-b59a-93feb3f02013\") " pod="openshift-marketplace/redhat-marketplace-6r5nr" Jan 21 17:10:22 crc kubenswrapper[4760]: I0121 17:10:22.369217 4760 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80218623-087f-4287-b59a-93feb3f02013-utilities\") pod \"redhat-marketplace-6r5nr\" (UID: \"80218623-087f-4287-b59a-93feb3f02013\") " pod="openshift-marketplace/redhat-marketplace-6r5nr" Jan 21 17:10:22 crc kubenswrapper[4760]: I0121 17:10:22.471149 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwhxz\" (UniqueName: \"kubernetes.io/projected/80218623-087f-4287-b59a-93feb3f02013-kube-api-access-hwhxz\") pod \"redhat-marketplace-6r5nr\" (UID: \"80218623-087f-4287-b59a-93feb3f02013\") " pod="openshift-marketplace/redhat-marketplace-6r5nr" Jan 21 17:10:22 crc kubenswrapper[4760]: I0121 17:10:22.471521 4760 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80218623-087f-4287-b59a-93feb3f02013-catalog-content\") pod \"redhat-marketplace-6r5nr\" (UID: \"80218623-087f-4287-b59a-93feb3f02013\") " pod="openshift-marketplace/redhat-marketplace-6r5nr" Jan 21 17:10:22 crc kubenswrapper[4760]: I0121 17:10:22.471557 4760 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80218623-087f-4287-b59a-93feb3f02013-utilities\") pod \"redhat-marketplace-6r5nr\" (UID: \"80218623-087f-4287-b59a-93feb3f02013\") " pod="openshift-marketplace/redhat-marketplace-6r5nr" Jan 21 17:10:22 crc kubenswrapper[4760]: I0121 17:10:22.472022 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80218623-087f-4287-b59a-93feb3f02013-utilities\") pod \"redhat-marketplace-6r5nr\" (UID: \"80218623-087f-4287-b59a-93feb3f02013\") " pod="openshift-marketplace/redhat-marketplace-6r5nr" Jan 21 17:10:22 crc kubenswrapper[4760]: I0121 17:10:22.472087 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80218623-087f-4287-b59a-93feb3f02013-catalog-content\") pod \"redhat-marketplace-6r5nr\" (UID: \"80218623-087f-4287-b59a-93feb3f02013\") " pod="openshift-marketplace/redhat-marketplace-6r5nr" Jan 21 17:10:22 crc kubenswrapper[4760]: I0121 17:10:22.492046 4760 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwhxz\" (UniqueName: \"kubernetes.io/projected/80218623-087f-4287-b59a-93feb3f02013-kube-api-access-hwhxz\") pod \"redhat-marketplace-6r5nr\" (UID: \"80218623-087f-4287-b59a-93feb3f02013\") " pod="openshift-marketplace/redhat-marketplace-6r5nr" Jan 21 17:10:22 crc kubenswrapper[4760]: I0121 17:10:22.593723 4760 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6r5nr" Jan 21 17:10:23 crc kubenswrapper[4760]: I0121 17:10:23.118944 4760 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6r5nr"] Jan 21 17:10:24 crc kubenswrapper[4760]: I0121 17:10:24.003743 4760 generic.go:334] "Generic (PLEG): container finished" podID="80218623-087f-4287-b59a-93feb3f02013" containerID="ca0960498dea39275fbb3e51384bcd04e12b337cab9b90118ed8f3450bedb100" exitCode=0 Jan 21 17:10:24 crc kubenswrapper[4760]: I0121 17:10:24.003863 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6r5nr" event={"ID":"80218623-087f-4287-b59a-93feb3f02013","Type":"ContainerDied","Data":"ca0960498dea39275fbb3e51384bcd04e12b337cab9b90118ed8f3450bedb100"} Jan 21 17:10:24 crc kubenswrapper[4760]: I0121 17:10:24.004056 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6r5nr" event={"ID":"80218623-087f-4287-b59a-93feb3f02013","Type":"ContainerStarted","Data":"fe3527ef2727ac02826c4cf69ddcf73b4be956c17b03acc3ab0125100e8891cf"} Jan 21 17:10:24 crc kubenswrapper[4760]: I0121 17:10:24.005991 4760 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 17:10:25 crc kubenswrapper[4760]: I0121 17:10:25.017738 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6r5nr" event={"ID":"80218623-087f-4287-b59a-93feb3f02013","Type":"ContainerStarted","Data":"6df9691bf02fdb243c8559977123fcaee2ddb71615b8af353ccb11067fb86303"} Jan 21 17:10:26 crc kubenswrapper[4760]: I0121 17:10:26.027420 4760 generic.go:334] "Generic (PLEG): container finished" podID="80218623-087f-4287-b59a-93feb3f02013" containerID="6df9691bf02fdb243c8559977123fcaee2ddb71615b8af353ccb11067fb86303" exitCode=0 Jan 21 17:10:26 crc kubenswrapper[4760]: I0121 17:10:26.027471 4760 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-6r5nr" event={"ID":"80218623-087f-4287-b59a-93feb3f02013","Type":"ContainerDied","Data":"6df9691bf02fdb243c8559977123fcaee2ddb71615b8af353ccb11067fb86303"} Jan 21 17:10:27 crc kubenswrapper[4760]: I0121 17:10:27.037410 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6r5nr" event={"ID":"80218623-087f-4287-b59a-93feb3f02013","Type":"ContainerStarted","Data":"40419840b2b3638df336be3cc59afdcec408cf57d65b2d0a2821e2db3235a58a"} Jan 21 17:10:27 crc kubenswrapper[4760]: I0121 17:10:27.060018 4760 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6r5nr" podStartSLOduration=2.585801251 podStartE2EDuration="5.059997729s" podCreationTimestamp="2026-01-21 17:10:22 +0000 UTC" firstStartedPulling="2026-01-21 17:10:24.005722491 +0000 UTC m=+4994.673492069" lastFinishedPulling="2026-01-21 17:10:26.479918949 +0000 UTC m=+4997.147688547" observedRunningTime="2026-01-21 17:10:27.054857081 +0000 UTC m=+4997.722626669" watchObservedRunningTime="2026-01-21 17:10:27.059997729 +0000 UTC m=+4997.727767307" Jan 21 17:10:32 crc kubenswrapper[4760]: I0121 17:10:32.594773 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6r5nr" Jan 21 17:10:32 crc kubenswrapper[4760]: I0121 17:10:32.595424 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6r5nr" Jan 21 17:10:32 crc kubenswrapper[4760]: I0121 17:10:32.675104 4760 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6r5nr" Jan 21 17:10:33 crc kubenswrapper[4760]: I0121 17:10:33.162775 4760 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6r5nr" Jan 21 17:10:33 crc kubenswrapper[4760]: I0121 17:10:33.220967 4760 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6r5nr"] Jan 21 17:10:35 crc kubenswrapper[4760]: I0121 17:10:35.129743 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6r5nr" podUID="80218623-087f-4287-b59a-93feb3f02013" containerName="registry-server" containerID="cri-o://40419840b2b3638df336be3cc59afdcec408cf57d65b2d0a2821e2db3235a58a" gracePeriod=2 Jan 21 17:10:35 crc kubenswrapper[4760]: I0121 17:10:35.578842 4760 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6r5nr" Jan 21 17:10:35 crc kubenswrapper[4760]: I0121 17:10:35.773633 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80218623-087f-4287-b59a-93feb3f02013-catalog-content\") pod \"80218623-087f-4287-b59a-93feb3f02013\" (UID: \"80218623-087f-4287-b59a-93feb3f02013\") " Jan 21 17:10:35 crc kubenswrapper[4760]: I0121 17:10:35.773679 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80218623-087f-4287-b59a-93feb3f02013-utilities\") pod \"80218623-087f-4287-b59a-93feb3f02013\" (UID: \"80218623-087f-4287-b59a-93feb3f02013\") " Jan 21 17:10:35 crc kubenswrapper[4760]: I0121 17:10:35.773706 4760 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwhxz\" (UniqueName: \"kubernetes.io/projected/80218623-087f-4287-b59a-93feb3f02013-kube-api-access-hwhxz\") pod \"80218623-087f-4287-b59a-93feb3f02013\" (UID: \"80218623-087f-4287-b59a-93feb3f02013\") " Jan 21 17:10:35 crc kubenswrapper[4760]: I0121 17:10:35.774888 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80218623-087f-4287-b59a-93feb3f02013-utilities" (OuterVolumeSpecName: "utilities") pod 
"80218623-087f-4287-b59a-93feb3f02013" (UID: "80218623-087f-4287-b59a-93feb3f02013"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:10:35 crc kubenswrapper[4760]: I0121 17:10:35.780977 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80218623-087f-4287-b59a-93feb3f02013-kube-api-access-hwhxz" (OuterVolumeSpecName: "kube-api-access-hwhxz") pod "80218623-087f-4287-b59a-93feb3f02013" (UID: "80218623-087f-4287-b59a-93feb3f02013"). InnerVolumeSpecName "kube-api-access-hwhxz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 17:10:35 crc kubenswrapper[4760]: I0121 17:10:35.807488 4760 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80218623-087f-4287-b59a-93feb3f02013-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "80218623-087f-4287-b59a-93feb3f02013" (UID: "80218623-087f-4287-b59a-93feb3f02013"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 17:10:35 crc kubenswrapper[4760]: I0121 17:10:35.876729 4760 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80218623-087f-4287-b59a-93feb3f02013-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 17:10:35 crc kubenswrapper[4760]: I0121 17:10:35.876792 4760 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80218623-087f-4287-b59a-93feb3f02013-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 17:10:35 crc kubenswrapper[4760]: I0121 17:10:35.876809 4760 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwhxz\" (UniqueName: \"kubernetes.io/projected/80218623-087f-4287-b59a-93feb3f02013-kube-api-access-hwhxz\") on node \"crc\" DevicePath \"\"" Jan 21 17:10:36 crc kubenswrapper[4760]: I0121 17:10:36.149815 4760 generic.go:334] "Generic (PLEG): container finished" podID="80218623-087f-4287-b59a-93feb3f02013" containerID="40419840b2b3638df336be3cc59afdcec408cf57d65b2d0a2821e2db3235a58a" exitCode=0 Jan 21 17:10:36 crc kubenswrapper[4760]: I0121 17:10:36.149853 4760 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6r5nr" Jan 21 17:10:36 crc kubenswrapper[4760]: I0121 17:10:36.149879 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6r5nr" event={"ID":"80218623-087f-4287-b59a-93feb3f02013","Type":"ContainerDied","Data":"40419840b2b3638df336be3cc59afdcec408cf57d65b2d0a2821e2db3235a58a"} Jan 21 17:10:36 crc kubenswrapper[4760]: I0121 17:10:36.150176 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6r5nr" event={"ID":"80218623-087f-4287-b59a-93feb3f02013","Type":"ContainerDied","Data":"fe3527ef2727ac02826c4cf69ddcf73b4be956c17b03acc3ab0125100e8891cf"} Jan 21 17:10:36 crc kubenswrapper[4760]: I0121 17:10:36.150205 4760 scope.go:117] "RemoveContainer" containerID="40419840b2b3638df336be3cc59afdcec408cf57d65b2d0a2821e2db3235a58a" Jan 21 17:10:36 crc kubenswrapper[4760]: I0121 17:10:36.173981 4760 scope.go:117] "RemoveContainer" containerID="6df9691bf02fdb243c8559977123fcaee2ddb71615b8af353ccb11067fb86303" Jan 21 17:10:36 crc kubenswrapper[4760]: I0121 17:10:36.187985 4760 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6r5nr"] Jan 21 17:10:36 crc kubenswrapper[4760]: I0121 17:10:36.197646 4760 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6r5nr"] Jan 21 17:10:36 crc kubenswrapper[4760]: I0121 17:10:36.208948 4760 scope.go:117] "RemoveContainer" containerID="ca0960498dea39275fbb3e51384bcd04e12b337cab9b90118ed8f3450bedb100" Jan 21 17:10:36 crc kubenswrapper[4760]: I0121 17:10:36.243409 4760 scope.go:117] "RemoveContainer" containerID="40419840b2b3638df336be3cc59afdcec408cf57d65b2d0a2821e2db3235a58a" Jan 21 17:10:36 crc kubenswrapper[4760]: E0121 17:10:36.243934 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"40419840b2b3638df336be3cc59afdcec408cf57d65b2d0a2821e2db3235a58a\": container with ID starting with 40419840b2b3638df336be3cc59afdcec408cf57d65b2d0a2821e2db3235a58a not found: ID does not exist" containerID="40419840b2b3638df336be3cc59afdcec408cf57d65b2d0a2821e2db3235a58a" Jan 21 17:10:36 crc kubenswrapper[4760]: I0121 17:10:36.243999 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40419840b2b3638df336be3cc59afdcec408cf57d65b2d0a2821e2db3235a58a"} err="failed to get container status \"40419840b2b3638df336be3cc59afdcec408cf57d65b2d0a2821e2db3235a58a\": rpc error: code = NotFound desc = could not find container \"40419840b2b3638df336be3cc59afdcec408cf57d65b2d0a2821e2db3235a58a\": container with ID starting with 40419840b2b3638df336be3cc59afdcec408cf57d65b2d0a2821e2db3235a58a not found: ID does not exist" Jan 21 17:10:36 crc kubenswrapper[4760]: I0121 17:10:36.244037 4760 scope.go:117] "RemoveContainer" containerID="6df9691bf02fdb243c8559977123fcaee2ddb71615b8af353ccb11067fb86303" Jan 21 17:10:36 crc kubenswrapper[4760]: E0121 17:10:36.245103 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6df9691bf02fdb243c8559977123fcaee2ddb71615b8af353ccb11067fb86303\": container with ID starting with 6df9691bf02fdb243c8559977123fcaee2ddb71615b8af353ccb11067fb86303 not found: ID does not exist" containerID="6df9691bf02fdb243c8559977123fcaee2ddb71615b8af353ccb11067fb86303" Jan 21 17:10:36 crc kubenswrapper[4760]: I0121 17:10:36.245137 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6df9691bf02fdb243c8559977123fcaee2ddb71615b8af353ccb11067fb86303"} err="failed to get container status \"6df9691bf02fdb243c8559977123fcaee2ddb71615b8af353ccb11067fb86303\": rpc error: code = NotFound desc = could not find container \"6df9691bf02fdb243c8559977123fcaee2ddb71615b8af353ccb11067fb86303\": container with ID 
starting with 6df9691bf02fdb243c8559977123fcaee2ddb71615b8af353ccb11067fb86303 not found: ID does not exist" Jan 21 17:10:36 crc kubenswrapper[4760]: I0121 17:10:36.245159 4760 scope.go:117] "RemoveContainer" containerID="ca0960498dea39275fbb3e51384bcd04e12b337cab9b90118ed8f3450bedb100" Jan 21 17:10:36 crc kubenswrapper[4760]: E0121 17:10:36.245972 4760 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca0960498dea39275fbb3e51384bcd04e12b337cab9b90118ed8f3450bedb100\": container with ID starting with ca0960498dea39275fbb3e51384bcd04e12b337cab9b90118ed8f3450bedb100 not found: ID does not exist" containerID="ca0960498dea39275fbb3e51384bcd04e12b337cab9b90118ed8f3450bedb100" Jan 21 17:10:36 crc kubenswrapper[4760]: I0121 17:10:36.246068 4760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca0960498dea39275fbb3e51384bcd04e12b337cab9b90118ed8f3450bedb100"} err="failed to get container status \"ca0960498dea39275fbb3e51384bcd04e12b337cab9b90118ed8f3450bedb100\": rpc error: code = NotFound desc = could not find container \"ca0960498dea39275fbb3e51384bcd04e12b337cab9b90118ed8f3450bedb100\": container with ID starting with ca0960498dea39275fbb3e51384bcd04e12b337cab9b90118ed8f3450bedb100 not found: ID does not exist" Jan 21 17:10:37 crc kubenswrapper[4760]: I0121 17:10:37.634632 4760 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80218623-087f-4287-b59a-93feb3f02013" path="/var/lib/kubelet/pods/80218623-087f-4287-b59a-93feb3f02013/volumes" Jan 21 17:10:50 crc kubenswrapper[4760]: I0121 17:10:50.946433 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:10:50 crc kubenswrapper[4760]: I0121 
17:10:50.946934 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:11:20 crc kubenswrapper[4760]: I0121 17:11:20.946003 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:11:20 crc kubenswrapper[4760]: I0121 17:11:20.946624 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:11:50 crc kubenswrapper[4760]: I0121 17:11:50.946673 4760 patch_prober.go:28] interesting pod/machine-config-daemon-5lp9r container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 17:11:50 crc kubenswrapper[4760]: I0121 17:11:50.947192 4760 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 17:11:50 crc kubenswrapper[4760]: I0121 17:11:50.947242 4760 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" Jan 21 17:11:50 crc kubenswrapper[4760]: I0121 17:11:50.947960 4760 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"81cf80092b7f63438cedfb66d8dfa60908df256d2711669353e781999c840011"} pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 17:11:50 crc kubenswrapper[4760]: I0121 17:11:50.948005 4760 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" podUID="5dd365e7-570c-4130-a299-30e376624ce2" containerName="machine-config-daemon" containerID="cri-o://81cf80092b7f63438cedfb66d8dfa60908df256d2711669353e781999c840011" gracePeriod=600 Jan 21 17:11:51 crc kubenswrapper[4760]: I0121 17:11:51.882475 4760 generic.go:334] "Generic (PLEG): container finished" podID="5dd365e7-570c-4130-a299-30e376624ce2" containerID="81cf80092b7f63438cedfb66d8dfa60908df256d2711669353e781999c840011" exitCode=0 Jan 21 17:11:51 crc kubenswrapper[4760]: I0121 17:11:51.882534 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerDied","Data":"81cf80092b7f63438cedfb66d8dfa60908df256d2711669353e781999c840011"} Jan 21 17:11:51 crc kubenswrapper[4760]: I0121 17:11:51.882865 4760 scope.go:117] "RemoveContainer" containerID="487ee7a25796844d3241333d83e375f710897deb4616727fe5bac585a055bf45" Jan 21 17:11:52 crc kubenswrapper[4760]: I0121 17:11:52.910407 4760 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lp9r" 
event={"ID":"5dd365e7-570c-4130-a299-30e376624ce2","Type":"ContainerStarted","Data":"5809c71ab3c6ef748aebd4d64a3a2923d746920192f1fd2fd4ea4f1017fd4dd2"}